Stop Chatting with Large Models: Product Managers Told to Rebuild AI Workflows
Huxiu (虎嗅) has published a practical manifesto urging product managers to stop treating large models as chat partners and instead rebuild workflows so AI becomes the primary executor of work, not a one-off idea generator. The guide argues that while AI use is widespread in 2026, most teams still operate as if nothing has changed: they open a chat window, paste a prompt, wait for an answer, and repeat. This consumption-style pattern, the guide claims, leaves a productivity gap measured not in percentages but in orders of magnitude.
From chatbox to agentic workflows
The piece contrasts the limitations of the classic web chat — brittle context, constant copy-paste, the human reduced to a “feedback porter” — with agentic, execution-capable workflows. Tools that can access local project folders, run code, preview results, and self-correct (Cursor is offered as an example) let AI act more like a responsible teammate than an external consultant. Teams that move from ephemeral chats to persistent, structured context (meeting transcriptions, markdown docs, templates, and global rules), the piece claims, unlock a flywheel: the more you store, the smarter and more aligned the AI becomes.
Practical rewiring for PMs
What does the guide recommend? Transcribe meetings automatically with AI assistants (Feishu (飞书), Zoom AI), export the notes to .md, and drop them into a project-specific folder; keep analysis notes, failure cases, and CSVs where the local AI can read them; write a few global “rules” files and PRD templates so the model internalizes team standards; let the AI generate, run, and iterate on scripts or drafts, reserving humans for setting success criteria and final judgement. Why write the whole PRD yourself when the AI can synthesize it from complete, structured context and produce a draft that only needs verification?
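The context-gathering step above can be sketched as a small helper that collects exported .md transcripts into a project-local folder and writes a global rules file the local AI tool can read. This is an illustrative sketch only: the folder and file names (`context/`, `RULES.md`) and the function itself are assumptions for demonstration, not anything the guide or any specific tool prescribes.

```python
import shutil
from pathlib import Path

def build_context(project_dir: str, transcripts: list[str], rules_text: str) -> list[str]:
    """Copy exported .md meeting transcripts into <project_dir>/context/
    and write a global RULES.md with team standards.

    Hypothetical helper for illustration; 'context' and 'RULES.md'
    are assumed conventions, not a standard.
    """
    context = Path(project_dir) / "context"
    context.mkdir(parents=True, exist_ok=True)

    copied = []
    for src in transcripts:
        src_path = Path(src)
        # Keep only markdown exports; skip raw audio or other formats.
        if src_path.suffix.lower() == ".md":
            shutil.copy2(src_path, context / src_path.name)
            copied.append(src_path.name)

    # Global rules the model should internalize (PRD templates, tone, standards).
    (Path(project_dir) / "RULES.md").write_text(rules_text, encoding="utf-8")
    return sorted(copied)
```

Once the transcripts and rules live inside the project folder, a local agentic tool pointed at that directory can read them on every run, which is the persistence the piece argues chat windows lack.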
Implications: role, security and geopolitics
The recommended shift reframes the PM as architect and judge rather than documenter. There are wider implications too: as organizations adopt smarter domestic models or newer Western models (GPT‑5 and beyond), considerations around data residency, compliance and export controls matter more than ever — especially amid US‑China tech tensions and tightening trade rules. The guide’s core pitch is simple: stop treating AI as disposable output and start treating it as an investable asset that, with the right architecture, can carry much of the execution burden.
