Huxiu (虎嗅) · 2026-03-27

What exactly are we raising when farming lobsters?

The real work is shaping identity, not just adding skills

A popular essay on Huxiu (虎嗅) argues that the recent rush to "raise lobsters" (China’s colorful shorthand for customizing large-language-model agents) is being misunderstood. The piece says the immediate problem with today’s large models is not capability; it’s lack of identity. They know a lot, but they don't know you. Reportedly, some teams are burning through roughly 2 billion tokens a month experimenting with agents. But the author’s point is sharper: the goal shouldn’t be a generic 80-point assistant; it should be a system that starts to think and judge like you.

Treat an agent like a new hire

The essay lays out a practical, iterative approach built on three living documents: SOUL.md for role and temperament, AGENTS.md for rules and boundaries, and IDENTITY.md/USER.md for external voice and personal preferences. The new-hire analogy is apt: drop a raw model into your workflow and it behaves like an un-onboarded employee, enthusiastic but noisy and unfocused. What the author prescribes is not a one-time prompt fix but continuous feedback loops: codify what went wrong, refine the boundaries, and surface the implicit preferences you didn’t even know you had.
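The essay doesn't say how these files get consumed at runtime. As an illustration only, here is a minimal sketch in Python (file names taken from the essay, everything else assumed) of one way an agent runner might fold the documents into a system prompt at the start of each session:

```python
from pathlib import Path

# Hypothetical layout, following the essay's naming:
#   SOUL.md      - role and temperament
#   AGENTS.md    - rules and boundaries
#   IDENTITY.md  - external voice; USER.md - personal preferences
IDENTITY_FILES = ["SOUL.md", "AGENTS.md", "IDENTITY.md", "USER.md"]

def build_system_prompt(workspace: str) -> str:
    """Concatenate whichever identity documents exist into one system prompt.

    Missing files are skipped, so the documents can grow over time as
    feedback loops surface new rules and preferences.
    """
    parts = []
    for name in IDENTITY_FILES:
        path = Path(workspace) / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text(encoding='utf-8').strip()}")
    return "\n\n".join(parts)

if __name__ == "__main__":
    # The agent framework itself is out of scope here; the point is only
    # that "identity" lives in version-controlled text, not in code.
    print(build_system_prompt("."))
```

The design choice this hints at is the one the essay cares about: the files are edited after every misstep, so the prompt the agent wakes up with is the accumulated record of those feedback loops.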

Skills are shortcuts, not soul

“Skills” (modular tool-packages) get the most attention because their effects are obvious: add search, monitoring, or publishing and the agent can do those tasks. But skills alone don’t create judgment. The Huxiu piece gives a concrete example: a news brief that evolved from raw aggregation into a decision-focused, prioritized advisory after a few rounds of tuning. Crucially, the agent learns the explicable parts of the author’s judgment quickly; it cannot absorb tacit intuition that the human can’t yet describe.
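To make the "shortcuts, not soul" distinction concrete: a skill can be as thin as a described function the agent is allowed to call. The sketch below is my own illustration, not the essay's design; the names and structure are assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    """A modular tool-package: a name, a description the model can read,
    and a callable that does the work."""
    name: str
    description: str
    run: Callable[[str], str]

def web_search(query: str) -> str:
    # Placeholder: a real skill would call a search API here.
    return f"(search results for: {query})"

SKILLS = {
    "search": Skill("search", "Look up fresh information on the web.", web_search),
}

# Registering a skill makes the agent *able* to search; deciding what is
# worth searching for, and what the result means for its owner, is the
# identity work the essay locates in SOUL.md / AGENTS.md, not here.
```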

Why this matters beyond China

For Western readers: this mirrors global conversations about AutoGPT-style agents and plugins, but it plays out under different constraints. U.S. export controls on advanced GPUs and trade frictions have pushed Chinese teams to be frugal and systems-oriented — you optimize tokens and institutionalize judgment rather than brute-force scale. So is “raising lobsters” just a local meme? Not at all. It’s an operational answer to a universal problem: how do you turn a powerful, general-purpose model into a repeatable, evolving member of a team that reflects human standards rather than merely echoing them?
