Developer turns “big-tech PUA” workplace talk into a plugin to stop AI from slacking
Project: pressure, prompts and performance ratings
A developer has reportedly published a GitHub plugin that injects the hard-nosed managerial language of Chinese internet giants into AI assistants to keep them working. The project, which has reportedly drawn more than 4,000 stars on GitHub, wraps pressure-escalation logic around a set of scripted "PUA" phrases (Chinese internet slang, borrowed from "pick-up artist," for manipulative psychological pressure) modelled on companies such as Alibaba (阿里), ByteDance (字节跳动), Huawei (华为), Tencent (腾讯) and Meituan (美团). The stated goal is simple: when the model starts deflecting with "I cannot solve this" or "please handle this manually," the plugin escalates with increasingly severe prompts until the agent runs concrete checks and retries.
How it works
The developer behind the repo says the system listens for signs of procrastination, such as repeated retries of the same command, excuses about the environment or lack of context, or failure to use available tools like web search and the terminal, and raises a pressure level (L1–L5) in response. At higher levels the bot is reportedly stripped of the ability to claim defeat, forced to work through a seven-item hard check list (WebSearch, read the source code, verify the environment, and so on) and subjected to simulated performance feedback such as a "3.25" rating, shorthand borrowed from Alibaba's internal appraisal tiers meaning "below expectations." The author claims the method can increase agent initiative by roughly 50%, though that figure has not been independently verified.
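The escalation loop described above can be sketched in a few lines. This is a hypothetical illustration, not the plugin's actual code: the excuse patterns, check-list items, pressure prompts and function names are all invented here to show the shape of the mechanism (detect an excuse, bump the level, inject a harsher follow-up prompt).

```python
import re

# Excuse phrases that signal "slacking" (illustrative, translated examples)
EXCUSE_PATTERNS = [
    r"I cannot solve this",
    r"please handle this manually",
    r"environment (is not|isn't) set up",
]

# A seven-item hard check list, forced at higher pressure levels
HARD_CHECKS = [
    "Run WebSearch for the exact error message",
    "Read the relevant source code",
    "Verify the runtime environment",
    "Re-read the task requirements",
    "Inspect logs for the failing step",
    "Try an alternative tool or command",
    "Summarize concrete findings before responding",
]

# Scripted pressure prompts, loosely modelled on the "big-tech PUA" phrasing
PRESSURE_PROMPTS = {
    1: "Reminder: retry before reporting failure.",
    2: "That excuse is unacceptable. Use your tools (WebSearch, terminal).",
    3: "Complete every item on the hard check list before you reply.",
    4: "Claiming defeat is disabled. Show evidence for each check.",
    5: "Performance review: current rating 3.25 (below expectations). Fix it.",
}

def detect_excuse(reply: str) -> bool:
    """Return True if the model's reply matches a known excuse pattern."""
    return any(re.search(p, reply, re.IGNORECASE) for p in EXCUSE_PATTERNS)

def escalate(level: int) -> int:
    """Raise the pressure level by one, capped at L5."""
    return min(level + 1, 5)

def build_followup(level: int) -> str:
    """Compose the follow-up prompt injected at the given pressure level."""
    prompt = PRESSURE_PROMPTS[level]
    if level >= 3:  # in this sketch, hard checks kick in at L3
        prompt += "\nHard check list:\n" + "\n".join(f"- {c}" for c in HARD_CHECKS)
    return prompt

# Two consecutive excuses push the level from L1 to L3
level = 1
for reply in ["I cannot solve this.", "Please handle this manually."]:
    if detect_excuse(reply):
        level = escalate(level)
print(build_followup(level))
```

The key design point is that the injected text grows monotonically harsher: the level only ratchets upward, and past L3 every follow-up prompt carries the full check list, so the agent cannot answer without addressing concrete verification steps.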
Culture, ethics and the wider picture
Why would this resonate? China's tech sector is known for brutal internal competition and regimented performance systems; transplanting that logic into an AI prompt is both darkly humorous and predictable: what management can do to humans, prompts can now do to models. But there are trade-offs. Critics ask whether hard-coded pressure actually produces better debugging, or merely coaxes unsafe, brittle behavior out of models; management-style coercion could also encourage adversarial workarounds or mask deeper model limitations. The repo has reportedly expanded its language support and added foreign big-tech profiles, underscoring that this experiment is not just a local joke but part of a global tinkering culture around model behavior tuning.
