Phoenix Tech (凤凰科技) 2026-03-09

Xiaohongshu (小红书) open-sources FireRed-Image-Edit-1.1, pushing faster, more consistent AI photo edits

Key development

Xiaohongshu (小红书)’s Super Intelligence team has released FireRed-Image-Edit-1.1, an upgraded image-editing model arriving less than a month after version 1.0. The model reportedly completes an edit in about 4.5 seconds and runs within roughly 30 GB of GPU memory. The team has open-sourced the code, technical report, model weights, and its training–distillation–inference framework, signaling a bid to rally developers around its stack.

What’s new

According to the team, version 1.1 brings sizable gains in identity-consistent editing (preserving a subject’s look across changes), multi-element compositing, portrait beauty/makeup, and font-style reference. It also supports end-to-end training and deployment optimizations—features aimed at turning research demos into production tools. How much better is it in the wild? The open release should make that testable quickly.

Why it matters

Xiaohongshu is a leading Chinese social-commerce platform often likened to a blend of Instagram and Pinterest, where AI-powered visual tools can shape creator workflows, advertising, and content moderation. The release underscores a broader shift among Chinese tech firms—such as Alibaba (阿里巴巴) and Baidu (百度)—toward open-sourcing parts of their AI stacks to spur adoption and ecosystem buy-in. It also arrives as U.S. export controls continue to limit China’s access to top-tier AI chips; a model that reportedly runs within ~30 GB of VRAM could be attractive for teams relying on more accessible GPUs.

The fine print

The announcement was surfaced via Phoenix New Media (凤凰网)’s platform, and some performance details are reportedly sourced from user-posted material; independent verification is pending. Still, with code and weights available, the developer community will likely move fast to validate claims—and to probe whether rapid iteration from 1.0 to 1.1 translates into real-world gains.
