Meta’s Fundamental AI Research team has released TRIBE v2, a foundation model that predicts how the human brain responds to images, audio and language. Trained on more than 1,000 hours of fMRI data from over 700 subjects, it’s now freely available to academic researchers worldwide.
Meta’s AI research division has released what may be the most powerful open-source tool ever built for computational neuroscience. TRIBE v2 — short for Transformer for In-silico Brain Experiments — is a foundation model that predicts, with high resolution, how the human brain responds to virtually any visual, auditory, or linguistic input. The model, the underlying code and an interactive demo are all available now under a non-commercial open license.
The announcement comes from Meta’s Fundamental AI Research (FAIR) team, which built TRIBE v2 on the back of a predecessor model that won the Algonauts 2025 Challenge, an international competition in brain encoding. That earlier version was trained on low-resolution fMRI recordings from just four individuals. TRIBE v2 represents a dramatic scale-up: it draws on data from more than 700 healthy volunteers who were exposed to a wide range of media, including images, podcasts, videos and text. The resulting dataset totals over 1,000 hours of fMRI recordings — an extraordinary resource by any measure in a field where scanning a single person for an hour can cost thousands of dollars.
What Makes TRIBE v2 Different
Traditional brain encoding models were narrow by design. A model trained to map neural responses to faces, for example, couldn’t tell you much about how the brain handles speech or music. The field has largely consisted of small, highly specialized systems built for specific experimental conditions. TRIBE v2 breaks from that pattern by handling vision, audio and language in a single unified architecture — what researchers call tri-modal coverage.
The practical gains are significant. TRIBE v2 can generate zero-shot predictions for new subjects, new languages and new tasks it was never explicitly trained on. It also replicates decades of established neuroscience findings through pure digital simulation — identifying well-known brain regions like the fusiform face area, which processes faces, and Broca’s area, which is involved in language, without ever running a physical experiment. The model delivers a roughly 70-fold improvement in fMRI resolution compared with similar approaches and consistently outperforms conventional linear encoding models.
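For readers unfamiliar with that baseline: a "linear encoding model" regresses each voxel's response onto features of the stimulus, typically with ridge regression. The sketch below is not TRIBE v2 or Meta's code — it's a minimal, self-contained illustration of the conventional approach such models are benchmarked against, with synthetic arrays standing in for stimulus features and fMRI voxel responses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 stimulus time points, 50 stimulus features
# (e.g. embeddings of an image or audio clip), and 10 fMRI voxels.
n_samples, n_features, n_voxels = 200, 50, 10
X = rng.standard_normal((n_samples, n_features))        # stimulus features
W_true = rng.standard_normal((n_features, n_voxels))    # unknown "brain" mapping
Y = X @ W_true + 0.5 * rng.standard_normal((n_samples, n_voxels))  # noisy voxels

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

W = fit_ridge(X, Y, alpha=10.0)
Y_pred = X @ W  # predicted voxel responses for the same stimuli
```

One model per voxel, fit jointly in a single linear solve: that simplicity is exactly why such baselines generalize poorly across modalities, and why a unified tri-modal model is a departure.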
Competing efforts exist. An ICLR 2026 submission explores brain-informed language model training using fMRI, and the Brain Encoding Response Generator (BERG) offers pre-trained encoding models with a Python interface. But none of those combine TRIBE v2’s subject breadth, multimodal architecture, resolution gains, and fully open release in a single package. At the Algonauts 2025 Challenge itself, the leaderboard showed competitive submissions — one team posted an average Pearson correlation of 0.2125 and another reached 0.2117 — but TRIBE v2 scales the underlying approach far beyond what the challenge dataset allowed.
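Those leaderboard numbers are averages of Pearson correlations between predicted and measured voxel time courses, which is the standard brain-encoding metric and takes only a few lines of NumPy. The snippet below is a generic sketch on synthetic data, not the challenge's official scoring code.

```python
import numpy as np

def voxelwise_pearson(pred, actual):
    """Mean Pearson r across voxels; both inputs are (time, voxels) arrays."""
    p = pred - pred.mean(axis=0)
    a = actual - actual.mean(axis=0)
    r = (p * a).sum(axis=0) / np.sqrt((p ** 2).sum(axis=0) * (a ** 2).sum(axis=0))
    return float(r.mean())

rng = np.random.default_rng(1)
actual = rng.standard_normal((100, 5))                 # measured responses
pred = actual + 2.0 * rng.standard_normal((100, 5))    # noisy predictions
score = voxelwise_pearson(pred, actual)                # falls in [-1, 1]
```

Scores like 0.21 sound low, but fMRI is noisy enough that even a perfect model cannot reach 1.0; results are therefore often reported relative to a noise ceiling.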
What This Means for Students and Early-Career Researchers
The cost and logistics of fMRI research have historically confined serious neuroscience work to well-funded labs at major research universities. A single scanning session requires specialized equipment, trained technicians, ethics approvals and months of data processing. TRIBE v2 collapses that barrier. Researchers can now run thousands of virtual experiments — testing how the brain might respond to a specific stimulus, or probing where neural signaling could break down in a neurological disorder — entirely on a computer.
For students in neuroscience, cognitive science, AI, or human-computer interaction, this opens up a new class of research questions that were previously impractical at the dissertation or undergraduate thesis level. The model weights and code are hosted on GitHub at facebookresearch/tribev2, and Meta has included a Colab notebook, so getting started requires nothing more than a web browser. An interactive demo is also available through Meta AI’s website.
Beyond academic research, TRIBE v2 has direct relevance to applied fields. Brain-computer interface development, neurological disorder diagnostics — including conditions like aphasia and sensory processing disorders — and AI system design informed by human cognition are all areas where this kind of model could accelerate progress. Those fields are also among the faster-growing segments of medtech and neurotech hiring. For students considering a pivot into computational neuroscience or AI-for-health, TRIBE v2 is now a flagship open-source project to build on or contribute to.
The Bigger Picture
Meta frames TRIBE v2 as part of a longer effort to use neuroscience to improve AI systems — not just to advance neuroscience for its own sake. The idea is that a more precise computational model of how humans process the world can directly inform how AI systems are designed and evaluated. Whether that feedback loop produces meaningful AI improvements remains to be seen, but the research release itself is immediately useful regardless of that longer-term ambition.
The CC BY-NC license means academic and nonprofit researchers can use the model freely; commercial applications require a separate arrangement. That restriction keeps it accessible to the students and researchers most likely to push the science forward.
Source: Meta AI
