New AI Model Reads MRIs to Predict Brain Age, Cancer Survival

A new AI model called BrainIAC can learn from unlabeled brain MRI scans and tackle many different medical tasks, from estimating brain age to predicting brain cancer survival. Researchers say the tool could help bring more powerful, data-efficient AI into everyday clinical care.

An artificial intelligence model that can read brain scans and tackle many different medical questions — without needing heavily labeled training data — could help bring advanced AI tools closer to everyday clinical use.

Researchers at Mass General Brigham have developed BrainIAC, a foundation model for brain MRI analysis that can estimate brain age, predict dementia risk, detect genetic mutations in brain tumors and forecast brain cancer survival, among other tasks. The results are published in the journal Nature Neuroscience.

BrainIAC was designed to solve a core problem in medical imaging AI. Most existing models are built for one narrow task — such as spotting a specific type of tumor — and they usually require thousands of scans that have been carefully labeled by experts. Creating those labeled datasets is slow, expensive and sometimes impossible, especially for rare diseases.

On top of that, brain MRI scans can look very different from one hospital to another. Imaging protocols vary between neurology and oncology clinics, and even similar scans can differ based on the machine, settings and patient population. Those differences can confuse AI systems and limit their usefulness outside the lab where they were trained.

To get around these hurdles, the team built BrainIAC as a general-purpose “adaptive core” for brain imaging. Instead of learning from labeled examples of each disease, the model uses a technique called self-supervised learning. In self-supervised learning, the AI teaches itself patterns and structures hidden in the data — in this case, unlabeled brain MRIs — and builds a rich internal representation of what healthy and diseased brains can look like.
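The article does not say which self-supervised objective BrainIAC uses, but one common approach is contrastive learning: the model is shown two augmented views of the same unlabeled scan and learns to produce matching representations for them. The sketch below illustrates that idea in miniature with NumPy, using a stand-in linear "encoder" and tiny fake volumes; every name and number here is illustrative, not from the study.

```python
# Toy sketch of contrastive self-supervised pretraining (SimCLR-style),
# with NumPy stand-ins for a real 3D-MRI encoder. Hypothetical throughout.
import numpy as np

rng = np.random.default_rng(0)

def augment(volume):
    """Stand-in augmentation: small noise, where a real pipeline would crop/flip."""
    return volume + rng.normal(scale=0.05, size=volume.shape)

def encode(volume, weights):
    """Stand-in encoder: flatten, one linear layer, L2-normalize."""
    z = volume.reshape(volume.shape[0], -1) @ weights
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def contrastive_loss(z1, z2, temperature=0.1):
    """InfoNCE: each scan's two views should match each other, not other scans."""
    sim = z1 @ z2.T / temperature            # (batch, batch) similarity matrix
    sim -= sim.max(axis=1, keepdims=True)    # numerical stability
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))      # diagonal entries = matching pairs

# "Unlabeled scans": a batch of 8 tiny fake volumes, with no labels anywhere.
scans = rng.normal(size=(8, 4, 4, 4))
weights = rng.normal(size=(64, 16))

loss = contrastive_loss(encode(augment(scans), weights),
                        encode(augment(scans), weights))
print(f"contrastive loss: {loss:.3f}")
```

Minimizing this loss over many unlabeled scans is what builds the "rich internal representation" the article describes; no diagnosis or outcome label is ever needed at this stage.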

Once that core understanding is in place, BrainIAC can be adapted to many different clinical tasks with relatively small amounts of labeled data.

The researchers first pretrained BrainIAC on multiple large brain MRI datasets, allowing it to learn from a wide range of scan types and patient populations. They then tested the model on 48,965 diverse brain MRI scans across seven different tasks that varied in difficulty and clinical focus.

Those tasks ranged from straightforward jobs, such as recognizing what type of MRI scan was being viewed, to highly complex challenges, such as identifying mutation types in brain tumors and predicting how long a patient with brain cancer might survive.

Across these tests, BrainIAC generalized what it had learned from both healthy and abnormal images, transferring that knowledge to new problems and outperforming three conventional AI models that had been trained specifically for individual tasks.

The model’s strengths were especially clear when data were limited or the problem was complex. In situations where only a small number of labeled scans were available, or where the task required subtle pattern recognition — such as predicting outcomes in brain cancer — BrainIAC’s broad training and flexible design gave it an edge.

That kind of data efficiency is crucial in medicine, where large, perfectly annotated datasets are rare. Many hospitals and research groups have archives of brain MRIs, but only a fraction of those scans come with detailed labels about diagnoses, genetic mutations or long-term outcomes. A model that can learn from unlabeled images and still perform well with minimal extra training could unlock much more of that hidden information.

This capability could speed up both research and clinical decision-making, according to corresponding author Benjamin Kann, of the Artificial Intelligence in Medicine (AIM) Program at Mass General Brigham.

“BrainIAC has the potential to accelerate biomarker discovery, enhance diagnostic tools and speed the adoption of AI in clinical practice,” Kann said in a news release. “Integrating BrainIAC into imaging protocols could help clinicians better personalize and improve patient care.”

In practical terms, that might mean using BrainIAC to flag patients whose brains appear older than expected for their age, which can be a warning sign for neurodegenerative diseases. It could also help oncologists quickly identify tumor mutations that guide treatment choices, or provide more accurate survival estimates to support care planning and clinical trial design.

The study focused on MRI, one of the most common and versatile tools for looking inside the brain. Because MRI does not use radiation and can capture detailed images of brain structure and, in some cases, function, it is central to diagnosing conditions from multiple sclerosis and stroke to brain tumors and dementia.

By creating a foundation model tailored to brain MRI, the team hopes to give clinicians and researchers a flexible tool they can adapt to new questions as they arise, rather than building a new AI system from scratch each time.

The authors note that more work is needed before BrainIAC can be widely deployed. Future research will test the framework on additional types of brain imaging and larger, more varied datasets to make sure it performs reliably across different institutions and patient groups.

Still, the study points toward a future where a single, adaptable AI engine could sit behind many different tools in the radiology suite, quietly analyzing scans, surfacing hidden patterns and helping clinicians make more informed decisions.

The Nature Neuroscience paper underscores how foundation models — which have transformed fields like natural language processing — are beginning to reshape medical imaging as well.

Source: Mass General Brigham