
DuSFE: Dual-Channel Squeeze-Fusion-Excitation co-attention for cross-modality registration of cardiac SPECT and CT (Medical Image Analysis 2023 & MICCAI Travel Award 2022)

Xiongchao Chen, Bo Zhou, Huidong Xie, Xueqi Guo, Jiazhen Zhang, James S. Duncan, Edward J. Miller, Albert J. Sinusas, John A. Onofrey, and Chi Liu

[Paper Link]


This repository contains the PyTorch implementation of the Dual-Branch Squeeze-Fusion-Excitation (DuSFE) module for cross-modality SPECT-CT registration.
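For orientation, the sketch below illustrates the general idea behind a dual-branch squeeze-fusion-excitation block: channel descriptors from the SPECT and CT feature branches are squeezed, fused, and used to re-weight each branch. This is a minimal illustration only, not the authors' DuSFE implementation; all layer choices, shapes, and the class name are assumptions.

```python
# Minimal, illustrative sketch of a dual-branch squeeze-fusion-excitation block.
# NOT the authors' exact DuSFE module; layer choices and shapes are assumptions.
import torch
import torch.nn as nn


class DualSqueezeFusionExcitation(nn.Module):
    """Fuses channel descriptors of two branches and re-weights each branch."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool3d(1)          # global average pooling (squeeze)
        self.fuse = nn.Sequential(                      # joint bottleneck over both branches (fusion)
            nn.Linear(2 * channels, (2 * channels) // reduction),
            nn.ReLU(inplace=True),
        )
        self.excite_a = nn.Sequential(nn.Linear((2 * channels) // reduction, channels), nn.Sigmoid())
        self.excite_b = nn.Sequential(nn.Linear((2 * channels) // reduction, channels), nn.Sigmoid())

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        b, c = feat_a.shape[:2]
        z_a = self.squeeze(feat_a).view(b, c)           # (B, C) channel descriptor, branch A
        z_b = self.squeeze(feat_b).view(b, c)           # (B, C) channel descriptor, branch B
        z = self.fuse(torch.cat([z_a, z_b], dim=1))     # fused cross-modality descriptor
        w_a = self.excite_a(z).view(b, c, 1, 1, 1)      # excitation weights for branch A
        w_b = self.excite_b(z).view(b, c, 1, 1, 1)      # excitation weights for branch B
        return feat_a * w_a, feat_b * w_b               # channel-recalibrated features


if __name__ == "__main__":
    spect = torch.randn(1, 32, 16, 16, 16)              # toy SPECT feature map
    ct = torch.randn(1, 32, 16, 16, 16)                 # toy CT attenuation-map feature map
    out_s, out_c = DualSqueezeFusionExcitation(32)(spect, ct)
    print(out_s.shape, out_c.shape)
```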

Citation

If you use this code for your research or project, please cite:

@inproceedings{chen2022dual,
  title={Dual-Branch Squeeze-Fusion-Excitation Module for Cross-Modality Registration of Cardiac SPECT and CT},
  author={Chen, Xiongchao and Zhou, Bo and Xie, Huidong and Guo, Xueqi and Zhang, Jiazhen and Sinusas, Albert J and Onofrey, John A and Liu, Chi},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={46--55},
  year={2022},
  organization={Springer}
}

Environment and Dependencies

Requirements:

Dataset Setup

Each data sample should include the following variables:

Amap_Trans: rotated (misaligned) CT-based attenuation map with a size of H x W x D.
Amap_CT: aligned CT-based attenuation map with a size of H x W x D.
SPECT_NC: reconstructed cardiac SPECT image in the photopeak window with a size of H x W x D.
SPECT_SC: reconstructed cardiac SPECT image in the scatter window with a size of H x W x D (optional).
Index_Trans: rigid transformation index with a size of 6 x 1 (3 translational indices and 3 rotational indices).
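The snippet below is a hypothetical sketch of assembling one such sample. Only the variable names and sizes come from the list above; the HDF5 storage format, file name, and key layout are assumptions and may differ from the actual data pipeline.

```python
# Hypothetical sketch of writing one data sample, assuming HDF5 storage via h5py.
# File name, key layout, and volume size are assumptions; only the variable names
# (Amap_Trans, Amap_CT, SPECT_NC, SPECT_SC, Index_Trans) come from the list above.
import h5py
import numpy as np

H, W, D = 64, 64, 64                                   # example volume size

with h5py.File("case_0001.h5", "w") as f:
    f.create_dataset("Amap_Trans", data=np.zeros((H, W, D), dtype=np.float32))  # rotated attenuation map
    f.create_dataset("Amap_CT", data=np.zeros((H, W, D), dtype=np.float32))     # aligned attenuation map
    f.create_dataset("SPECT_NC", data=np.zeros((H, W, D), dtype=np.float32))    # photopeak-window SPECT
    f.create_dataset("SPECT_SC", data=np.zeros((H, W, D), dtype=np.float32))    # scatter-window SPECT (optional)
    f.create_dataset("Index_Trans", data=np.zeros((6, 1), dtype=np.float32))    # 3 translations + 3 rotations
```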

To Run the Code

Sample training and testing scripts are provided in the root folder as train_register.sh and test_register.sh.

where

--experiment_name: name of the experiment; all training results are saved under this folder.
--model_type: model type used (default convolutional neural network: "model_reg").
--dataset: dataset type.
--data_root: path of the dataset.
--net_G: neural network model used (default: 'DuRegister_DuSE').
--net_filter: number of filters in the densely connected layers of DuSFE (default: 32).
--lr: learning rate.
--step_size: number of epochs between learning rate decay steps.
--gamma: learning rate decay factor.
--n_epochs: number of training epochs.
--batch_size: training batch size.
--n_patch_train: number of training patches extracted from each image volume.
--patch_size_train: training patch size.
--n_patch_test: number of testing patches extracted from each image volume.
--patch_size_test: testing patch size.
--n_patch_valid: number of validation patches extracted from each image volume.
--patch_size_valid: validation patch size.
--test_epochs: interval (in epochs) between periodic validations.
--save_epochs: interval (in epochs) between saving the trained model.
--gpu_ids: GPU configuration.
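For reference, the following is a hypothetical argparse definition mirroring the options listed above. It is not the repository's actual option parser; the option names and the stated defaults come from the list above, while types and any other defaults are assumptions.

```python
# Hypothetical argparse sketch mirroring the options listed above; the actual
# option parser in this repository may differ in names, types, and defaults.
import argparse

parser = argparse.ArgumentParser(description="DuSFE registration training (illustrative)")
parser.add_argument("--experiment_name", type=str, required=True)   # results folder name
parser.add_argument("--model_type", type=str, default="model_reg")  # model type (default from README)
parser.add_argument("--dataset", type=str)                          # dataset type
parser.add_argument("--data_root", type=str)                        # path of the dataset
parser.add_argument("--net_G", type=str, default="DuRegister_DuSE") # network model (default from README)
parser.add_argument("--net_filter", type=int, default=32)           # DuSFE dense-layer filters (default from README)
parser.add_argument("--lr", type=float)                             # learning rate
parser.add_argument("--step_size", type=int)                        # epochs per learning-rate decay step
parser.add_argument("--gamma", type=float)                          # learning-rate decay factor
parser.add_argument("--n_epochs", type=int)                         # number of training epochs
parser.add_argument("--batch_size", type=int)                       # training batch size
parser.add_argument("--n_patch_train", type=int)                    # training patches per volume
parser.add_argument("--patch_size_train", type=int)                 # training patch size
parser.add_argument("--n_patch_test", type=int)                     # testing patches per volume
parser.add_argument("--patch_size_test", type=int)                  # testing patch size
parser.add_argument("--n_patch_valid", type=int)                    # validation patches per volume
parser.add_argument("--patch_size_valid", type=int)                 # validation patch size
parser.add_argument("--test_epochs", type=int)                      # validation interval (epochs)
parser.add_argument("--save_epochs", type=int)                      # checkpoint-saving interval (epochs)
parser.add_argument("--gpu_ids", type=str)                          # GPU configuration
args = parser.parse_args()
```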

Data Availability

The original dataset used in this study is available from the corresponding author (chi.liu@yale.edu) upon reasonable request and with the approval of Yale University.

Contact

If you have any questions, please file an issue or directly contact the author:

Xiongchao Chen: xiongchao.chen@yale.edu, cxiongchao9587@gmail.com