BrainSymphony Introduces Lightweight Multimodal Neuroscience Foundation Model

An arXiv preprint (v2 posted Feb 12, 2026) introduces BrainSymphony, a lightweight multimodal foundation model that integrates fMRI time series with diffusion-derived structural connectivity and supports both unimodal and multimodal training and deployment. The architecture combines parallel spatial and temporal transformers, a Perceiver bottleneck, and a signed graph transformer, fused adaptively; despite its small parameter budget, it outperforms larger models on prediction, classification, and unsupervised network-discovery benchmarks.
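To make the described pipeline concrete, here is a minimal PyTorch sketch of how these components could fit together: parallel spatial and temporal transformers over the fMRI signal, a Perceiver-style cross-attention bottleneck, a signed graph transformer that biases attention with the structural connectivity matrix, and a learned gate as the adaptive fusion step. All class names, module sizes, token layouts, and the gated-fusion rule are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


class PerceiverBottleneck(nn.Module):
    """Compress a long token sequence into a few learned latents via
    cross-attention (Perceiver-style)."""

    def __init__(self, dim, num_latents=32, num_heads=4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens):  # tokens: (B, N, dim)
        q = self.latents.unsqueeze(0).expand(tokens.size(0), -1, -1)
        out, _ = self.cross_attn(q, tokens, tokens)
        return out  # (B, num_latents, dim)


class SignedGraphTransformer(nn.Module):
    """Self-attention over brain regions with the signed connectivity
    matrix added as an attention bias -- a simple stand-in for the
    paper's signed graph transformer."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, node_feats, signed_adj):  # (B, N, dim), (B, N, N)
        # A float attn_mask is added to the attention logits; tile per head.
        bias = signed_adj.repeat_interleave(self.num_heads, dim=0)
        out, _ = self.attn(node_feats, node_feats, node_feats, attn_mask=bias)
        return self.norm(node_feats + out)


class BrainSymphonySketch(nn.Module):
    """Hypothetical assembly of the described components; sizes and the
    fusion rule are assumptions made for illustration."""

    def __init__(self, n_regions, n_timepoints, dim=64):
        super().__init__()
        # Functional branch: parallel transformers over the spatial axis
        # (regions as tokens) and the temporal axis (timepoints as tokens).
        self.spatial_proj = nn.Linear(n_timepoints, dim)
        self.temporal_proj = nn.Linear(n_regions, dim)

        def encoder():
            layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
            return nn.TransformerEncoder(layer, num_layers=2)

        self.spatial_tf, self.temporal_tf = encoder(), encoder()
        self.bottleneck = PerceiverBottleneck(dim)
        # Structural branch: each region's connectivity row as node features.
        self.node_embed = nn.Linear(n_regions, dim)
        self.graph_tf = SignedGraphTransformer(dim)
        # Adaptive fusion: learned gate mixing the two modality summaries.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, bold, signed_adj):
        # bold: (B, n_regions, n_timepoints); signed_adj: (B, N, N)
        spat = self.spatial_tf(self.spatial_proj(bold))                     # (B, N, dim)
        temp = self.temporal_tf(self.temporal_proj(bold.transpose(1, 2)))  # (B, T, dim)
        func = self.bottleneck(torch.cat([spat, temp], dim=1)).mean(dim=1)
        struct = self.graph_tf(self.node_embed(signed_adj), signed_adj).mean(dim=1)
        g = self.gate(torch.cat([func, struct], dim=-1))
        return g * func + (1 - g) * struct  # fused embedding, (B, dim)


# Usage with toy shapes.
model = BrainSymphonySketch(n_regions=100, n_timepoints=200)
bold = torch.randn(2, 100, 200)   # toy BOLD time series
adj = torch.randn(2, 100, 100)    # toy signed structural connectivity
print(model(bold, adj).shape)     # torch.Size([2, 64])
```

A learned gate of this kind is one plausible way the model could fall back gracefully to a single modality when only fMRI or only structural data is available; the paper's actual fusion mechanism may differ.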
Scoring Rationale
Strong methodological novelty and clear benchmark gains, tempered by preprint status and validation that so far comes from a single source.
Sources
- [2506.18314] BrainSymphony: A parameter-efficient multimodal foundation model for brain dynamics with limited data (https://arxiv.org/abs/2506.18314)

