{"id":36222,"date":"2026-04-06T09:30:00","date_gmt":"2026-04-06T09:30:00","guid":{"rendered":"https:\/\/www.tun.com\/home\/?p=36222"},"modified":"2026-04-29T21:30:21","modified_gmt":"2026-04-29T21:30:21","slug":"meta-ai-releases-tribe-v2-brain-activity-prediction-model","status":"publish","type":"post","link":"https:\/\/www.tun.com\/home\/meta-ai-releases-tribe-v2-brain-activity-prediction-model\/","title":{"rendered":"Meta AI Releases TRIBE v2 Brain Activity Prediction Model"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-uagb-blockquote uagb-block-e7eb3fc3 uagb-blockquote__skin-border uagb-blockquote__stack-img-none\"><blockquote class=\"uagb-blockquote\"><div class=\"uagb-blockquote__content\">Meta&#8217;s Fundamental AI Research team has released TRIBE v2, a foundation model that predicts how the human brain responds to images, audio and language. Trained on over 700 subjects and more than 1,000 hours of fMRI data, it&#8217;s now freely available to academic researchers worldwide.<\/div><footer><div class=\"uagb-blockquote__author-wrap uagb-blockquote__author-at-left\"><\/div><\/footer><\/blockquote><\/div>\n\n\n\n<div class=\"wp-block-group is-content-justification-space-between is-nowrap is-layout-flex wp-container-core-group-is-layout-0dfbf163 wp-block-group-is-layout-flex\"><div style=\"font-size:16px;\" class=\"has-text-align-left wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">Peter Corrigan<\/p><\/div><\/div>\n\n\n<div class=\"wp-block-uagb-social-share uagb-social-share__outer-wrap uagb-social-share__layout-horizontal uagb-block-ee584a31\">\n<div class=\"wp-block-uagb-social-share-child uagb-ss-repeater uagb-ss__wrapper uagb-block-ec619ce7\"><span class=\"uagb-ss__link\" data-href=\"https:\/\/www.facebook.com\/sharer.php?u=\" tabindex=\"0\" role=\"button\" aria-label=\"facebook\"><span class=\"uagb-ss__source-wrap\"><span class=\"uagb-ss__source-icon\"><svg xmlns=\"https:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 512 512\"><path d=\"M504 256C504 119 393 8 256 8S8 119 8 256c0 123.8 90.69 226.4 209.3 245V327.7h-63V256h63v-54.64c0-62.15 37-96.48 93.67-96.48 27.14 0 55.52 4.84 55.52 4.84v61h-31.28c-30.8 0-40.41 19.12-40.41 38.73V256h68.78l-11 71.69h-57.78V501C413.3 482.4 504 379.8 504 256z\"><\/path><\/svg><\/span><\/span><\/span><\/div>\n\n\n\n<div class=\"wp-block-uagb-social-share-child uagb-ss-repeater uagb-ss__wrapper uagb-block-32d99934\"><span class=\"uagb-ss__link\" data-href=\"https:\/\/twitter.com\/share?url=\" tabindex=\"0\" role=\"button\" aria-label=\"twitter\"><span class=\"uagb-ss__source-wrap\"><span class=\"uagb-ss__source-icon\"><svg xmlns=\"https:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 512 512\"><path d=\"M389.2 48h70.6L305.6 224.2 487 464H345L233.7 318.6 106.5 464H35.8L200.7 275.5 26.8 48H172.4L272.9 180.9 389.2 48zM364.4 421.8h39.1L151.1 88h-42L364.4 421.8z\"><\/path><\/svg><\/span><\/span><\/span><\/div>\n\n\n\n<div class=\"wp-block-uagb-social-share-child uagb-ss-repeater uagb-ss__wrapper uagb-block-1d136f14\"><span class=\"uagb-ss__link\" data-href=\"https:\/\/www.linkedin.com\/shareArticle?url=\" tabindex=\"0\" role=\"button\" aria-label=\"linkedin\"><span class=\"uagb-ss__source-wrap\"><span class=\"uagb-ss__source-icon\"><svg xmlns=\"https:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 448 512\"><path d=\"M416 32H31.9C14.3 32 0 46.5 0 64.3v383.4C0 465.5 14.3 480 
<\/div>\n<\/div><\/div>\n\n\n\n<p>Meta&#8217;s AI research division has released what may be the most powerful open-source tool ever built for computational neuroscience. TRIBE v2 \u2014 short for TRImodal Brain Encoder \u2014 is a foundation model that predicts, with high resolution, how the human brain responds to virtually any visual, auditory, or linguistic input. The model, the underlying code and an interactive demo are all available now under a non-commercial open license.<\/p>\n\n\n\n<p>The announcement comes from Meta&#8217;s Fundamental AI Research (FAIR) team, which built TRIBE v2 on the back of a predecessor model that won the Algonauts 2025 Challenge, an international competition in brain encoding. That earlier version was trained on low-resolution fMRI recordings from just four individuals. TRIBE v2 represents a dramatic scale-up: it draws on data from more than 700 healthy volunteers who were exposed to a wide range of media, including images, podcasts, videos and text. The resulting dataset totals over 1,000 hours of fMRI recordings \u2014 an extraordinary resource in a field where scanning a single person for an hour can cost thousands of dollars.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What Makes TRIBE v2 Different<\/h2>\n\n\n\n<p>Traditional brain encoding models were narrow by design. A model trained to map neural responses to faces, for example, couldn&#8217;t tell you much about how the brain handles speech or music. The field has largely consisted of small, highly specialized systems built for specific experimental conditions. TRIBE v2 breaks from that pattern by handling vision, audio and language in a single unified architecture \u2014 what researchers call tri-modal coverage.<\/p>\n\n\n\n<p>The practical gains are significant. TRIBE v2 can generate zero-shot predictions for new subjects, new languages and new tasks it was never explicitly trained on. It also replicates decades of established neuroscience findings through pure digital simulation \u2014 identifying well-known brain regions like the fusiform face area, which processes faces, and Broca&#8217;s area, which is involved in language, without ever running a physical experiment. The model delivers a roughly 70-times improvement in fMRI resolution compared to similar approaches and consistently outperforms conventional linear encoding models.<\/p>\n\n\n\n<p>Competing efforts exist. An ICLR 2026 submission explores brain-informed language model training using fMRI, and the Brain Encoding Response Generator (BERG) offers pre-trained encoding models with a Python interface. But none of those combine TRIBE v2&#8217;s subject breadth, multimodal architecture, resolution gains, and fully open release in a single package. 
At the Algonauts 2025 challenge itself, the leaderboard showed competitive submissions \u2014 one team posted an average Pearson correlation of 0.2125 and another reached 0.2117 \u2014 but TRIBE v2 scales the underlying approach far beyond what the challenge dataset allowed.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What This Means for Students and Early-Career Researchers<\/h2>\n\n\n\n<p>The cost and logistics of fMRI research have historically confined serious neuroscience work to well-funded labs at major research universities. A single scanning session requires specialized equipment, trained technicians, ethics approvals and months of data processing. TRIBE v2 collapses that barrier. Researchers can now run thousands of virtual experiments \u2014 testing how the brain might respond to a specific stimulus, or probing where neural signaling could break down in a neurological disorder \u2014 entirely on a computer.<\/p>\n\n\n\n<p>For students in neuroscience, cognitive science, AI, or human-computer interaction, this opens up a new class of research questions that were previously impractical at the dissertation or undergraduate thesis level. The model weights and code are hosted on GitHub at <a href=\"https:\/\/github.com\/facebookresearch\/tribev2\" target=\"_blank\" rel=\"noopener\">facebookresearch\/tribev2<\/a>, and Meta has included a Colab notebook to get started with nothing more than a Python environment (a brief, hypothetical usage sketch appears at the end of this article). An interactive demo is also available through Meta AI&#8217;s website.<\/p>\n\n\n\n<p>Beyond academic research, TRIBE v2 has direct relevance to applied fields. Brain-computer interface development, neurological disorder diagnostics \u2014 including conditions like aphasia and sensory processing disorders \u2014 and AI system design informed by human cognition are all areas where this kind of model could accelerate progress. Those fields are also among the faster-growing segments of medtech and neurotech hiring. For students considering a pivot into computational neuroscience or AI-for-health, TRIBE v2 is now a flagship open-source project to build on or contribute to.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Bigger Picture<\/h2>\n\n\n\n<p>Meta frames TRIBE v2 as part of a longer effort to use neuroscience to improve AI systems \u2014 not just to advance neuroscience for its own sake. The idea is that a more precise computational model of how humans process the world can directly inform how AI systems are designed and evaluated. Whether that feedback loop produces meaningful AI improvements remains to be seen, but the research release itself is immediately useful regardless of that longer-term ambition.<\/p>\n\n\n\n<p>The CC BY-NC license means academic and nonprofit researchers can use the model freely; commercial applications require a separate arrangement. The restriction has little practical effect on the students and researchers most likely to push the science forward.<\/p>
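\n\n\n\n<p>To make the point about needing nothing more than a Python environment concrete, here is a minimal sketch of what an in-silico experiment with a model like TRIBE v2 might look like. This is an illustration only: the package name, the load_pretrained entry point, the checkpoint identifier and the predict() call are assumptions rather than the documented interface, and the released repository and Colab notebook remain the authoritative reference.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Hypothetical sketch only: the module, functions and file names below are\n# illustrative assumptions, not the documented TRIBE v2 interface.\nimport torch\nfrom tribe import load_pretrained  # assumed entry point\n\nmodel = load_pretrained('tribe-v2')  # assumed checkpoint identifier\nmodel.eval()\n\n# A virtual experiment: predict voxel-level fMRI responses to a tri-modal\n# stimulus (video frames, audio track and transcript of the same clip).\nwith torch.no_grad():\n    predicted_bold = model.predict(\n        video='clip.mp4',   # hypothetical stimulus files\n        audio='clip.wav',\n        text='transcript of the clip',\n    )\n\n# predicted_bold is assumed to be a (timepoints, voxels) tensor of simulated\n# responses that could be compared against real recordings or examined for\n# regions such as the fusiform face area.\nprint(predicted_bold.shape)<\/code><\/pre>\n\n\n\n<p>In a real workflow, those predicted responses would then be compared against held-out recordings or inspected region by region, the same kinds of virtual experiments described above.<\/p>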
\n\n\n\n<p class=\"source-attribution\"><strong>Source:<\/strong> <a href=\"https:\/\/ai.meta.com\/blog\/tribe-v2-brain-predictive-foundation-model\/\" target=\"_blank\" rel=\"nofollow noopener\">Meta AI<\/a><\/p>\n\n\n\n<details class=\"research-citations\">\n<summary>Additional research sources<\/summary>\n<ul>\n<li><a href=\"https:\/\/neurosciencenews.com\/meta-tribe-ai-brain-decoding-30398\/\" target=\"_blank\" rel=\"nofollow noopener\">https:\/\/neurosciencenews.com\/meta-tribe-ai-brain-decoding-30398\/<\/a><\/li>\n<li><a href=\"https:\/\/ai.meta.com\/research\/publications\/a-foundation-model-of-vision-audition-and-language-for-in-silico-neuroscience\/\" target=\"_blank\" rel=\"nofollow noopener\">https:\/\/ai.meta.com\/research\/publications\/a-foundation-model-of-vision-audition-and-language-for-in-silico-neuroscience\/<\/a><\/li>\n<li><a href=\"https:\/\/blog.pebblous.ai\/story\/meta-tribe-v2-brain-story-pb\/en\/\" target=\"_blank\" rel=\"nofollow noopener\">https:\/\/blog.pebblous.ai\/story\/meta-tribe-v2-brain-story-pb\/en\/<\/a><\/li>\n<li><a href=\"https:\/\/ai.meta.com\/blog\/tribe-v2-brain-predictive-foundation-model\/\" target=\"_blank\" rel=\"nofollow noopener\">https:\/\/ai.meta.com\/blog\/tribe-v2-brain-predictive-foundation-model\/<\/a><\/li>\n<li><a href=\"https:\/\/www.marktechpost.com\/2026\/03\/26\/meta-releases-tribe-v2-a-brain-encoding-model-that-predicts-fmri-responses-across-video-audio-and-text-stimuli\/\" target=\"_blank\" rel=\"nofollow noopener\">https:\/\/www.marktechpost.com\/2026\/03\/26\/meta-releases-tribe-v2-a-brain-encoding-model-that-predicts-fmri-responses-across-video-audio-and-text-stimuli\/<\/a><\/li>\n<li><a href=\"https:\/\/www.linkedin.com\/posts\/aiatmeta_introducing-tribe-v2-a-predictive-foundation-activity-7442919649895239683-q6ez\" target=\"_blank\" rel=\"nofollow noopener\">https:\/\/www.linkedin.com\/posts\/aiatmeta_introducing-tribe-v2-a-predictive-foundation-activity-7442919649895239683-q6ez<\/a><\/li>\n<\/ul>\n<\/details>\n","protected":false},"excerpt":{"rendered":"<p>Meta&#8217;s Fundamental AI Research team has released TRIBE v2, a foundation model that predicts how the human brain responds to images, audio and language. 
Trained on over 700 subjects and more than 1,000 hours of fMRI data, it&#8217;s now freely available to academic researchers worldwide.<\/p>\n","protected":false},"author":6,"featured_media":36221,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-no-separators","format":"standard","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[8],"tags":[761,762,763,765,764,760,748],"class_list":["post-36222","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","tag-brain-modeling","tag-fmri","tag-foundation-models","tag-medarc","tag-meta-ai","tag-neuroscience","tag-open-source"],"acf":[],"aioseo_notices":[],"uagb_featured_image_src":{"full":["https:\/\/www.tun.com\/home\/wp-content\/uploads\/2026\/04\/meta-ai-releases-tribe-v2-brain-activity-prediction-model.png",1792,1024,false],"thumbnail":["https:\/\/www.tun.com\/home\/wp-content\/uploads\/2026\/04\/meta-ai-releases-tribe-v2-brain-activity-prediction-model-150x150.png",150,150,true],"medium":["https:\/\/www.tun.com\/home\/wp-content\/uploads\/2026\/04\/meta-ai-releases-tribe-v2-brain-activity-prediction-model-300x171.png",300,171,true],"medium_large":["https:\/\/www.tun.com\/home\/wp-content\/uploads\/2026\/04\/meta-ai-releases-tribe-v2-brain-activity-prediction-model-768x439.png",768,439,true],"large":["https:\/\/www.tun.com\/home\/wp-content\/uploads\/2026\/04\/meta-ai-releases-tribe-v2-brain-activity-prediction-model-1024x585.png",1024,585,true],"1536x1536":["https:\/\/www.tun.com\/home\/wp-content\/uploads\/2026\/04\/meta-ai-releases-tribe-v2-brain-activity-prediction-model-1536x878.png",1536,878,true],"2048x2048":["https:\/\/www.tun.com\/home\/wp-content\/uploads\/2026\/04\/meta-ai-releases-tribe-v2-brain-activity-prediction-model.png",1792,1024,false]},"uagb_author_info":{"display_name":"Peter Corrigan","author_link":"https:\/\/www.tun.com\/home\/author\/peter-corrigan\/"},"uagb_comment_info":0,"uagb_excerpt":"Meta's Fundamental AI Research team has released TRIBE v2, a foundation model that predicts how the human brain responds to images, audio and language. 
Trained on over 700 subjects and more than 1,000 hours of fMRI data, it's now freely available to academic researchers worldwide.","_links":{"self":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/36222","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/comments?post=36222"}],"version-history":[{"count":6,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/36222\/revisions"}],"predecessor-version":[{"id":36304,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/36222\/revisions\/36304"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/media\/36221"}],"wp:attachment":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/media?parent=36222"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/categories?post=36222"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/tags?post=36222"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}