凤凰科技 (Phoenix Tech) · 2026-04-06

New AI model can detect multiple neurodegenerative diseases from a blood test

The breakthrough

Researchers in China have unveiled an artificial intelligence model that reportedly identifies multiple neurodegenerative diseases from a single blood sample, offering a potential alternative to costly imaging and invasive spinal taps. According to ifeng (凤凰网), the system combines routine blood biomarkers with machine‑learning algorithms to distinguish conditions such as Alzheimer's disease, Parkinson's disease and other dementias. Could a simple blood draw become the first line of screening for disorders that today often require PET scans or cerebrospinal fluid analysis?
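The report does not disclose the actual biomarkers or model architecture, but the described task is multi‑class classification over a blood‑biomarker panel. A minimal sketch of that pipeline shape, using entirely synthetic data and a generic scikit‑learn classifier (all names and numbers here are illustrative assumptions, not details from the study):

```python
# Hypothetical sketch: multi-class disease screening from blood biomarkers.
# Synthetic stand-in data: 500 patients x 12 blood measures, labelled
# 0 = control, 1 = Alzheimer's-like, 2 = Parkinson's-like.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

# A generic gradient-boosted classifier stands in for the undisclosed model.
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
proba = clf.predict_proba(X_test)  # one row of per-disease probabilities
print(proba.shape)                 # per held-out patient
```

In a real screening setting the per‑class probabilities, not just the top label, matter: a borderline score would route a patient to confirmatory imaging or CSF testing rather than a diagnosis.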

Validation and limits

The team says the model was trained on clinical and molecular data and validated against patient cohorts, achieving strong discrimination between disease types, but independent replication is still required. Reports indicate the algorithm was tested on multiple clinical groups, yet details on cohort size, demographic diversity and peer review remain limited in the public account. Experts caution that algorithmic performance in controlled datasets often falls short in real‑world, multi‑ethnic clinical practice; regulatory approval will require transparent methods, larger external validations and prospective trials.
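"Discrimination between disease types" is typically quantified with a metric such as one‑vs‑rest ROC AUC on a cohort the model never saw during training. The study's actual metrics and cohorts are not public, so the sketch below uses synthetic data purely to show what an external‑validation check looks like:

```python
# Hedged sketch: measuring multi-class discrimination on a held-out cohort.
# All data here are synthetic; the split stands in for an external cohort.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, n_informative=5,
                           n_classes=3, random_state=1)
X_dev, X_ext, y_dev, y_ext = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
# One-vs-rest AUC averages how well each disease is separated from the rest.
auc = roc_auc_score(y_ext, model.predict_proba(X_ext), multi_class="ovr")
print(round(auc, 3))
```

The caution about real‑world performance is exactly about this step: an AUC computed on the development cohort tends to overstate what the model achieves on patients from different hospitals, ancestries or assay platforms.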

Geopolitical and clinical implications

The development arrives as China intensifies investment in AI and biotech, aiming to close gaps with Western diagnostic pipelines. But geopolitical realities — from export controls on advanced chips to tightened international research collaboration — could shape how quickly such tools are adopted or validated abroad. Clinically, if robustly proven, a scalable blood‑AI test could lower costs, speed recruitment into trials and enable earlier interventions. Yet patient privacy, algorithmic bias and the medical consequences of false positives and negatives mean adoption will be cautious. For now, the claim is promising, but more independent evidence is needed before this becomes routine clinical care.
