xmed-lab/ZEBRA

[NeurIPS 2025] Zebra: Towards Zero-Shot Cross-Subject Generalization for Universal Brain Visual Decoding

🌟 If you find our project useful, please consider giving us a star!

📌 Overview

Architecture of the Zebra framework

Recent advances in neural decoding have enabled the reconstruction of visual experiences from brain activity, positioning fMRI-to-image reconstruction as a promising bridge between neuroscience and computer vision. However, current methods predominantly rely on subject-specific models or require subject-specific fine-tuning, limiting their scalability and real-world applicability. In this work, we introduce Zebra, the first zero-shot brain visual decoding framework that eliminates the need for subject-specific adaptation. Zebra is built on the key insight that fMRI representations can be decomposed into subject-related and semantic-related components. By leveraging adversarial training, our method explicitly disentangles these components to isolate subject-invariant, semantic-specific representations. This disentanglement allows Zebra to generalize to unseen subjects without any additional fMRI data or retraining.

Comparison between previous methods and Zebra

📣 Latest Updates

🟡 2025/11    Code released!

🟡 2025/09    ZEBRA is accepted at NeurIPS 2025!

🛠️ Installation & Setup

🖥️ Environment Setup

conda create -n zebra python==3.10
conda activate zebra
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt

📊 Data and Weights Preparation

Download the pre-processed dataset:

python download_dataset.py
unzip fmri_npy.zip
unzip nsddata_stimuli.zip

Download the pretrained weights of the fMRI encoder from here. Then put them under pretrained_weights/fMRI2fMRI_UKB/.

Download the pretrained weights for reconstruction via:

cd pretrained_weights 
wget -O unclip6_epoch0_step110000.ckpt -c "https://huggingface.co/datasets/pscotti/mindeyev2/resolve/main/unclip6_epoch0_step110000.ckpt?download=true"
wget -O convnext_xlarge_alpha0.75_fullckpt.pth -c "https://huggingface.co/datasets/pscotti/mindeyev2/resolve/main/convnext_xlarge_alpha0.75_fullckpt.pth?download=true"
wget -O sd_image_var_autoenc.pth -c "https://huggingface.co/datasets/pscotti/mindeyev2/resolve/main/sd_image_var_autoenc.pth?download=true"
wget -O zavychromaxl_v30.safetensors -c "https://huggingface.co/datasets/pscotti/mindeyev2/resolve/main/zavychromaxl_v30.safetensors?download=true"
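
Once the downloads finish, a quick existence check can catch missing or truncated files before training. This is an optional sketch, not part of the repository; it assumes you run it from inside pretrained_weights/:

```shell
# Optional sanity check (not a repo script): verify the four checkpoint
# files exist and are non-empty inside pretrained_weights/.
for f in unclip6_epoch0_step110000.ckpt \
         convnext_xlarge_alpha0.75_fullckpt.pth \
         sd_image_var_autoenc.pth \
         zavychromaxl_v30.safetensors; do
    if [ -s "$f" ]; then
        echo "ok: $f"
    else
        echo "missing or empty: $f"
    fi
done
```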

🚀 Quick Start

This codebase supports training, testing, and evaluation through a single bash script.

bash run.sh 0 zebra 1234567 1 1 last

Parameters:

$1: which GPU to use for training

$2: train file postfix, e.g., zebra

$3: which stages to run: 1234567 for the full pipeline, 34567 for test & eval only

  • 1: train brain model
  • 2: train prior network
  • 3: image reconstruction
  • 4: evaluation with all metrics
  • 5: caption the keyframes with BLIP-2
  • 6: enhanced image reconstruction
  • 7: final evaluation with all metrics

$4: unseen subject held out for testing (the other subjects are used for training): one of [1, 2, 5, 7]

$5: which checkpoint to use for inference: ['last', 'prior']
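
To decode every held-out subject in turn, the Quick Start command can be wrapped in a loop. This is a sketch, not a script shipped with the repo: it copies the argument pattern of the example above verbatim (which passes the subject index in both numeric slots before the checkpoint name), and uses echo as a dry run so the commands are printed rather than executed; remove echo to actually launch:

```shell
# Dry-run sketch (not a repo script) of a leave-one-subject-out sweep over
# the subjects listed above. "echo" prints each command instead of running
# it; drop it to launch the runs for real.
for subj in 1 2 5 7; do
    echo bash run.sh 0 zebra 1234567 "$subj" "$subj" last
done
```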


Note that for convenience of debugging, use_wandb is set to False by default.

If you would like to use wandb, first run wandb login, then set use_wandb to True in train_zebra.py.

📚 Citation

If you find this project useful, please consider citing:

@article{wang2025zebra,
  title={ZEBRA: Towards Zero-Shot Cross-Subject Generalization for Universal Brain Visual Decoding},
  author={Wang, Haonan and Lu, Jingyu and Li, Hongrui and Li, Xiaomeng},
  journal={arXiv preprint arXiv:2510.27128},
  year={2025}
}
