Huxiu (虎嗅) · 2026-03-20

Xiaohongshu (小红书) Sets Its Sights on 'AI Version of Zhengzhou Gang'

Platform crackdown, but the problem predates AI

Xiaohongshu (小红书) said on March 10 that it will strictly target “AI-managed accounts,” banning any technology that simulates real people to create or interact with non‑authentic content. The move echoes the app’s long-standing pitch: a discovery and social‑commerce space where “real people” share genuine experiences — a key reason millions of young Chinese users treat it like a cross between Instagram, Pinterest and a product-review marketplace. The announcement was reportedly prompted in part by OpenClaw (nicknamed “龙虾,” “lobster”), an open‑source AI agent said to be capable of generating posts, publishing notes and even mimicking interactions in comments and private messages.

Old factories, new tools

The deeper target is not only agents like OpenClaw but the content factories known in industry slang as the Zhengzhou Gang (郑州帮). Their method is simple: harvest hot posts, deconstruct headlines and covers, reassemble the elements and flood the platform with near‑identical, monetizable notes. Operators such as Liu Qing have reportedly said they are unfazed — they have adapted before and can shift from full automation to “semi‑managed” workflows, in which AI drafts are still polished and published under human control. The industry now distinguishes three tiers: AI as assistant, AI as partial manager, and full AI takeover. Platforms can spot full takeover relatively easily, but they struggle to police humans who merely act like machines.

A policy and product paradox

Xiaohongshu faces a practical paradox. On one hand it is promoting “human‑feeling AI” tools such as an in‑app assistant; on the other it is banning external agents that hollow out community trust. The company has reportedly been testing its own OpenStoryline editor and open‑sourcing image‑editing models that can automate idea generation, layout and copy. So what exactly is forbidden — AI that helps a creator, or AI that replaces the creator? This dilemma plays out against Beijing’s wider push to tighten platform and content governance: regulators expect platforms to curb manipulation and illicit monetization even as they race to deploy domestic AI capabilities.

Enforcement, adaptation and the authenticity question

Tightening rules will shift behavior, not necessarily eliminate industrialized content production. When automation is detectable, platforms can act. When humans adopt machine‑like workflows, enforcement becomes a judgment call about intent, craft and commercial motive. Xiaohongshu has signaled it will choose “protecting the human feel” over untrammelled AI efficiency — but can any platform define that line clearly enough to stop operators who simply change tactics? The coming weeks will test whether policy, detection tech and product roadmaps can converge to preserve the very authenticity that made the app valuable.
