New arXiv study maps how people anthropomorphize — and trust — large language models
Researchers quantify anthropomorphism across thousands of interactions
A new preprint on arXiv (arXiv:2604.15316) examines how people attribute human-like minds and emotions to large language models (LLMs) and how those attributions shape trust. The study analyzed more than 2,000 human–LLM interactions collected from 115 participants to “map the dimensions” users invoke when they anthropomorphize conversational models. The authors report identifying recurring axes, such as perceived agency, emotionality and competence, that people lean on when deciding whether to believe or rely on a model.
Why this matters now
LLMs are moving out of research labs and into everyday products. Designers, companies and regulators want to know: when does a conversational agent become “someone” in users’ minds, and what risks follow from misplaced trust? Short answer: the social cues these models emit can create familiarity and confidence very quickly. That has design implications for disclosure, interface cues and guardrails, and regulatory ones too, especially where misattributed intent could amplify harm.
Context for global and Chinese readers
This work is relevant to the global AI ecosystem and to China’s rapidly evolving LLM market, where companies such as Baidu (百度), Alibaba (阿里巴巴) and Tencent (腾讯) are racing to commercialize chat and assistant products. Chinese platforms are reportedly rolling out increasingly human-like interfaces; in that competitive and highly regulated environment, understanding anthropomorphism is not just academic but commercially urgent. Geopolitical tensions and export controls on advanced chips and AI tools add another layer: national strategies for trustworthy AI increasingly shape which models are developed, deployed and audited.
Takeaway: design, disclosure and trust
The paper underscores a simple but consequential point: people will treat LLMs like social actors unless designers intentionally prevent it. How will companies balance conversational power with clear limits? Will policymakers require transparency so users don’t over-trust a system that only simulates understanding? The arXiv preprint is a timely prompt for those questions, and it lays an empirical foundation for further work on ethics, regulation and product design.
