Anthropic shares internal Claude Code retrospective on Skills: 5 common traits of a good Skill
What Anthropic shared
Anthropic engineer Thariq published a long public thread titled "Lessons from Building Claude Code: How We Use Skills," and Chinese tech outlet Huxiu (虎嗅) ran a summary based on a WeChat piece by 夕小瑶科技说's 丸美小沐. According to the thread, Anthropic already has hundreds of Skills in active use inside Claude Code, and it lays out practical takeaways from that experience. The key revelation: the highest-value content in a Skill is not the manual-like facts the model already knows, but the team-specific "gotchas" and operational rules it cannot learn from public documentation.
Five practical traits: high-context rules, triggers, and memory
Thariq's core point is simple and counterintuitive. What matters most are high-context, local rules: approvals that are "optional" on paper but mandatory in practice; rate limits that blow up at specific concurrency levels; workflows and acceptance criteria that live only in oral tradition. Why? Because the model's pretraining already covers low-context knowledge such as API signatures and general call flows, a Skill should surface what the model cannot infer from the public web. Reportedly, Thariq even summarized it bluntly: "the densest information in a Skill is the summary of pitfalls."
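As a rough illustration of what a "pitfall-dense" Skill might look like (the service name, limits, and rules below are all invented; only the general SKILL.md layout with `name` and `description` frontmatter fields follows Anthropic's published Skills format), the valuable lines are exactly the ones no public doc contains:

```markdown
---
name: deploy-payments-service
description: Use when deploying the payments service to staging or production. Covers required approvals, rate-limit constraints, and acceptance criteria.
---

# Deploying the payments service

## Pitfalls (read these first)
- The change-review approval is marked "optional" in the tooling but is mandatory in practice; unapproved deploys get rolled back by the release team.
- The upstream billing API rate-limits hard above ~20 concurrent requests; keep deploy-time backfill jobs at concurrency 10 or below.
- A deploy counts as "done" only after the smoke-test dashboard stays green for 15 minutes.
```

Note that none of these lines restate API signatures or call flows; each one encodes a rule the model could not recover from public documentation.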
He also emphasized metadata and state. The Skill description is not a human-facing label; it is the trigger the model reads when deciding whether to activate a Skill, so a vague description makes the Skill either fail to fire or fire too often. And persistence matters: simple logs, structured JSON, or a SQLite file recording previous runs turn a Skill from one-shot help into a continuous assistant that remembers past outputs (for example, a standup-post Skill that knows yesterday's notes and highlights only the real deltas today).
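The standup example above can be sketched in a few lines of Python. This is a hypothetical illustration, not Anthropic's implementation: the function names, the SQLite schema, and the idea of storing notes as JSON are all assumptions made for the sketch.

```python
import json
import sqlite3

def _ensure_table(conn):
    # One row per Skill run; notes are stored as a JSON array of strings.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS runs (id INTEGER PRIMARY KEY, notes TEXT)"
    )

def record_run(db_path, notes):
    """Append today's standup notes to the Skill's run history."""
    conn = sqlite3.connect(db_path)
    _ensure_table(conn)
    conn.execute("INSERT INTO runs (notes) VALUES (?)", (json.dumps(notes),))
    conn.commit()
    conn.close()

def delta_since_last_run(db_path, notes):
    """Return only the notes that are new compared with the previous run."""
    conn = sqlite3.connect(db_path)
    _ensure_table(conn)
    row = conn.execute(
        "SELECT notes FROM runs ORDER BY id DESC LIMIT 1"
    ).fetchone()
    conn.close()
    previous = set(json.loads(row[0])) if row else set()
    return [n for n in notes if n not in previous]
```

On the first run everything is a delta; after `record_run` saves yesterday's notes, only genuinely new items are surfaced, which is what turns the Skill into a continuous assistant rather than first-time help.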
Why this matters beyond Anthropic
For Western readers unfamiliar with China's fast‑moving AI scene, the lesson is transferable: as organizations build internal tool ecosystems, codifying tacit, high‑context knowledge into machine‑readable Skills becomes a competitive advantage. Geopolitics and trade policy are already nudging many firms to develop in‑house stacks or locally hosted deployments, which increases the value of compact, team‑specific operational knowledge encoded as Skills. In short: don’t teach the model what it can already read; teach it what only your team knows.
