Who is feeding the answers behind AI fund selection? Beware of marketing contamination
The problem in plain terms
Retail investors in China increasingly use large AI models as a first-stop adviser for investment decisions. Short, natural-language prompts such as “Is it time for active equity or index funds?” now yield ready-made recommendations. But some users say the answers feel less like neutral analysis and more like polished ads: the same products, often not performance leaders, surface with near-identical wording across different platforms. Reportedly, this pattern stems not from superior performance data but from coordinated content pre-positioned to influence model outputs.
How marketing gets into the model
It has been reported that a growing number of marketing and finance‑tech service firms are selling “AI positioning” packages to fund houses: multi‑channel content seeding, unified messaging across corporate sites and social media, and paid placements designed to create high‑frequency signals that base models can pick up. Investors themselves share “AI fund‑selection prompts” on platforms such as Snowball (雪球), and on short‑video and social feeds like Douyin (抖音) and WeChat (微信), accelerating the loop between seeded content and model answers. ETFs, with standardized tags and clear themes, are especially vulnerable — easier for models to classify, and therefore easier to push into recommendation outputs via repeated signal amplification.
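The amplification loop described above can be made concrete with a toy sketch. All funds, passages, and the scoring rule here are invented for illustration: a naive retriever that sums raw term matches across sources will rank a blurb seeded on three channels above a single neutral regulatory filing, purely through repetition.

```python
from collections import Counter

# Toy corpus: one neutral filing vs. identical marketing copy
# seeded across several channels (all text invented for illustration).
corpus = [
    ("regulatory_filing", "Fund A: 3-year return 4.1%, max drawdown 18%, holdings disclosed quarterly."),
    ("corporate_site",    "Fund B is the smart choice for AI-era investors seeking steady growth."),
    ("social_post_1",     "Fund B is the smart choice for AI-era investors seeking steady growth."),
    ("social_post_2",     "Fund B is the smart choice for AI-era investors seeking steady growth."),
]

def score(query: str, text: str) -> int:
    """Naive relevance: count occurrences of each query term in the passage."""
    words = Counter(text.lower().split())
    return sum(words[t] for t in query.lower().split())

def rank(query: str):
    # Aggregate scores per fund across all channels: repetition adds up,
    # which is exactly the "high-frequency signal" that seeding exploits.
    totals = Counter()
    for _source, text in corpus:
        fund = "Fund A" if "Fund A" in text else "Fund B"
        totals[fund] += score(query, text)
    return totals.most_common()

print(rank("smart fund choice for investors"))
```

The point is not that production systems score text this crudely, but that any pipeline whose relevance signal grows with cross-channel repetition is exposed to the same manipulation.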
Why this matters, and what comes next
This is more than a marketing quirk. If the first answer an investor sees is shaped by who can most effectively “feed” a model, decision-making shifts from verifiable metrics — returns, drawdowns, holdings and risk profiles — to content engineering and budget. Who benefits? Who decides what counts as neutral information? The issue intersects with broader governance and geopolitical concerns: China’s domestic AI ecosystem is growing amid export controls and technology competition, so how models distinguish neutral public filings from branded content will shape both market outcomes and regulatory scrutiny. Platforms and regulators will need clearer rules — better machine‑readable disclosures from fund managers, stronger provenance signals in training and retrieval data, and model design that differentiates marketing from independently verifiable information — if AI is to help investors rather than quietly sell them a story.
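One way to read the “stronger provenance signals” idea above is as a re-ranking step that down-weights content by source trustworthiness before it reaches the answer. The labels, weights, and scores below are entirely hypothetical; a real system would need audited source metadata rather than hand-picked constants.

```python
# Sketch of provenance-aware re-ranking. Weights and source labels are
# hypothetical assumptions, not an existing standard or API.
PROVENANCE_WEIGHT = {
    "regulatory_filing": 1.0,   # independently verifiable disclosure
    "corporate_site":    0.1,   # first-party marketing
    "social_seed":       0.02,  # unverified, possibly coordinated
}

def reweight(raw_score: float, provenance: str) -> float:
    """Scale a relevance score by source trust; unknown sources get zero."""
    return raw_score * PROVENANCE_WEIGHT.get(provenance, 0.0)

# Invented retrieval scores: the seeded fund wins on raw relevance.
passages = [
    {"fund": "Fund A", "raw": 1.0,  "provenance": "regulatory_filing"},
    {"fund": "Fund B", "raw": 5.0,  "provenance": "corporate_site"},
    {"fund": "Fund B", "raw": 10.0, "provenance": "social_seed"},
]

totals = {}
for p in passages:
    totals[p["fund"]] = totals.get(p["fund"], 0.0) + reweight(p["raw"], p["provenance"])

# After re-weighting, the verifiable filing outranks the seeded copy.
print(sorted(totals.items(), key=lambda kv: -kv[1]))
```

The hard problems sit outside this sketch: agreeing on machine-readable provenance labels, and preventing marketers from laundering seeded content into the high-trust tier.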
