
MedCoSS

This is the official PyTorch implementation of our CVPR 2024 (Highlight) paper "Continual Self-supervised Learning: Towards Universal Multi-modal Medical Data Representation Learning".

MedCoSS illustration
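The paper's central idea, pretraining one model sequentially on a stream of modalities while replaying a small rehearsal buffer of previously seen data, can be sketched conceptually as follows. All names here (the stand-in update function, the `buffer_ratio` parameter, the dict-based model state) are illustrative assumptions for exposition, not the repository's API:

```python
import random

def pretrain_on_modality(model_state, batches):
    """Stand-in for one self-supervised pretraining stage (e.g. masked modeling).

    A real implementation would run forward/backward passes here; we only
    count update steps to keep the sketch self-contained.
    """
    for _batch in batches:
        model_state["steps"] += 1  # placeholder for a gradient update
    return model_state

def continual_pretrain(modalities, buffer_ratio=0.1, seed=0):
    """Sequentially pretrain on each modality in order, mixing in a rehearsal
    buffer of samples retained from earlier modalities (illustrative only)."""
    rng = random.Random(seed)
    model_state = {"steps": 0}
    buffer = []  # rehearsal buffer holding samples from past modalities
    for _name, data in modalities:
        # Train on the current modality plus replayed samples from earlier ones,
        # which counteracts forgetting of previously learned representations.
        model_state = pretrain_on_modality(model_state, data + buffer)
        # Keep a small random subset of this modality for future rehearsal.
        k = max(1, int(len(data) * buffer_ratio))
        buffer.extend(rng.sample(data, k))
    return model_state, buffer
```

For example, with two modalities of 20 samples each and `buffer_ratio=0.1`, the second stage trains on 22 batches (20 new plus 2 replayed), and the final buffer holds 4 retained samples.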

Requirements

CUDA 11.5
Python 3.8
PyTorch 1.11.0
cuDNN 8.3.2.44
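A minimal environment setup matching the versions above might look like the following. The environment name and the choice of conda plus pip are assumptions, not instructions from the repository:

```shell
# Create an isolated environment with the Python version listed above
# (the name "medcoss" is an arbitrary choice)
conda create -n medcoss python=3.8 -y
conda activate medcoss

# Install PyTorch 1.11.0 built against CUDA 11.5; the matching cuDNN
# libraries ship inside the official wheel
pip install torch==1.11.0+cu115 --extra-index-url https://download.pytorch.org/whl/cu115
```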

Data Preparation

Pre-processing

Pre-training

Pre-trained Model

Fine-tuning

To do

Citation

If this code is helpful for your study, please cite:

@inproceedings{ye2024medcoss,
  title={Continual Self-supervised Learning: Towards Universal Multi-modal Medical Data Representation Learning},
  author={Ye, Yiwen and Xie, Yutong and Zhang, Jianpeng and Chen, Ziyang and Wu, Qi and Xia, Yong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={11114--11124},
  year={2024}
}

Acknowledgements

The whole framework is based on MAE, Uni-Perceiver, and MGCA.

Contact

Yiwen Ye (ywye@mail.nwpu.edu.cn)