凤凰科技 2026-04-19

ChatGPT reportedly adds age‑prediction tool to identify teenage users

What was reported

OpenAI's ChatGPT has reportedly rolled out an age‑prediction feature intended to detect teenage users and adjust interactions accordingly. The company says the aim is to provide safer, age‑appropriate responses and to trigger parental‑consent flows or additional safeguards where required, reportedly by inferring likely age from interaction signals rather than relying solely on self‑reported birthdates. Details on the algorithms, their accuracy, and whether the feature uses biometric inputs have not been independently verified.

Why it matters

Age gating matters because different jurisdictions treat minors differently online. Regulators in the US, EU and elsewhere are tightening rules on child safety, data collection and algorithmic profiling: think COPPA‑style protections in the United States, the EU’s upcoming AI Act, and China’s strict youth‑protection rules enforced across platforms such as WeChat (微信). Will an automated age‑estimation layer help platforms comply — or create new privacy and profiling risks? The trade‑off is stark: better targeted protections versus expanded automated inferences about vulnerable users.

Risks and reactions

Privacy advocates warn that automated age‑estimation can be error‑prone and may misclassify adults as minors or vice versa, with real consequences for access and free expression. There are also concerns about what signals are being used, how long inferred labels are stored, and whether such data could be subject to cross‑border transfer or subpoenas. Tech firms in China and the West have long used layered approaches — real‑name verification, parental controls, and content filters — but adding opaque machine inference raises fresh questions about transparency and accountability.

What comes next

OpenAI will likely face calls for independent audits, clearer disclosure of methods, and opt‑out pathways. Regulators will watch closely: is this a responsible safety measure, or an expedient form of profiling? Either way, the move underscores a broader tension in AI governance — how to protect young users without normalizing invasive inferences. Who decides what “age‑appropriate” means? For now, that remains as much a policy question as a technical one.
