Official PyTorch Implementation of the Paper - DiaMond: Dementia Diagnosis with Multi-Modal Vision Transformers Using MRI and PET - Accepted at WACV 2025
conda env create -n diamond --file requirements.yaml
conda activate diamond
We used data from Alzheimer's Disease Neuroimaging Initiative (ADNI) and Japanese Alzheimer's Disease Neuroimaging Initiative (J-ADNI). Since we are not allowed to share our data, you would need to process the data yourself. Data for training, validation, and testing should be stored in separate HDF5 files, using the following hierarchical format:
- MRI/T1, containing the T1-weighted 3D MRI data.
- PET/FDG, containing the 3D FDG PET data.
- DX, containing the diagnosis labels: CN, Dementia/AD, FTD, or MCI, if available.
- RID, with the patient ID, if available.

The package uses PyTorch. To train and test DiaMond, execute the src/train.py script.
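For orientation, the HDF5 layout described above could be assembled with h5py roughly as in the sketch below. This is an illustration only: the one-group-per-scan structure, the file name, the volume shapes, and the label values are assumptions, not the exact preprocessing used in the paper.

```python
import h5py
import numpy as np

# Rough sketch only: one group per scan is an assumption, and the file name,
# volume shapes, and label values below are placeholders.
scans = [
    {
        "mri": np.random.rand(128, 128, 128).astype(np.float32),  # T1-weighted 3D MRI
        "pet": np.random.rand(128, 128, 128).astype(np.float32),  # 3D FDG-PET
        "dx": "CN",                                                # diagnosis label
        "rid": "0001",                                             # patient ID
    },
]

with h5py.File("train.h5", "w") as f:
    for i, scan in enumerate(scans):
        grp = f.create_group(str(i))
        grp.create_dataset("MRI/T1", data=scan["mri"])   # nested groups are created automatically
        grp.create_dataset("PET/FDG", data=scan["pet"])
        grp.attrs["DX"] = scan["dx"]
        grp.attrs["RID"] = scan["rid"]
```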
The configuration file for the command arguments is stored in config/config.yaml.
The essential command line arguments are:
- --dataset_path: Path to HDF5 files containing either train, validation, or test data splits.
- --img_size: Size of the input scan.
- --test: True for model evaluation.

After specifying the config file, simply start training/evaluation by:
python src/train.py
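As a rough illustration, the relevant entries in config/config.yaml might look like the following; the key names mirror the arguments listed above, while the values and the exact schema of the shipped file are assumptions:

```yaml
# Illustrative values only -- consult the shipped config/config.yaml for the exact schema.
dataset_path: /path/to/data/test.h5   # HDF5 file holding the desired data split
img_size: 128                         # size of the input scan (placeholder value)
test: true                            # set to true for model evaluation, false for training
```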
For any questions, please contact: Yitong Li (yi_tong.li@tum.de)
If you find this repository useful, please consider giving a star and citing the paper:
@misc{li2024diamond,
title={DiaMond: Dementia Diagnosis with Multi-Modal Vision Transformers Using MRI and PET},
author={Li, Yitong and Ghahremani, Morteza and Wally, Youssef and Wachinger, Christian},
eprint={2410.23219},
archivePrefix={arXiv},
primaryClass={cs.CV},
year={2024},
url={https://arxiv.org/abs/2410.23219},
}
WACV 2025 proceedings:
Coming soon