
[MedIA 2023] The official implementation of the Medical Image Analysis (MedIA) paper "Dive into the details of Self-supervised Learning for Medical Image Analysis".

Medical-SSL

[NEWS-20230417]

We have added the configs for 2D pretraining and fine-tuning with the EyePACS and DRIVE datasets. Please refer to "configs_2d".

The code of our paper:

Chuyan Zhang, Hao Zheng, Yun Gu, "Dive into the details of Self-supervised Learning for Medical Image Analysis"

How to run?

To run the benchmark, please refer to the config files in "configs/".

Dependencies

How to perform pretraining?

Step1. Prepare the pretraining dataset

Download the LUNA2016 dataset from https://luna16.grand-challenge.org/download/

Store the LUNA2016 dataset in the path "../../Data/LUNA2016"
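
The unpacked data should end up under that path. Below is a minimal sanity-check sketch, assuming the standard LUNA16 release layout (subset0–subset9 folders plus annotations.csv); adjust the names if your download is organized differently.

import os

luna_root = "../../Data/LUNA2016"                 # path expected by this README
expected = [f"subset{i}" for i in range(10)] + ["annotations.csv"]
for name in expected:
    path = os.path.join(luna_root, name)
    print(("found   " if os.path.exists(path) else "missing ") + path)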

Step2. Pre-process the pretraining data for different pretext tasks.

Pre-process the LUNA2016 dataset with the scripts in "pre_processing/":
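
The scripts in "pre_processing/" differ per pretext task. As a rough, hypothetical illustration of the kind of step they perform, the sketch below loads one LUNA16 scan with SimpleITK, clips the intensities to a HU window, and rescales them to [0, 1]; the window, file names, and output location are placeholders rather than the repository's actual settings.

import numpy as np
import SimpleITK as sitk

def preprocess_scan(mhd_path, hu_min=-1000.0, hu_max=400.0):
    # Load the .mhd header (and its .raw volume) as a (z, y, x) array in Hounsfield units.
    image = sitk.ReadImage(mhd_path)
    volume = sitk.GetArrayFromImage(image).astype(np.float32)
    volume = np.clip(volume, hu_min, hu_max)            # clip to an illustrative HU window
    volume = (volume - hu_min) / (hu_max - hu_min)      # rescale to [0, 1]
    return volume

volume = preprocess_scan("../../Data/LUNA2016/subset0/example.mhd")   # placeholder file
np.save("../../Data/LUNA2016_processed/example.npy", volume)          # placeholder output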

Step3. List the paths to the pre-processed datasets in "datasets_3D/paths.py"
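
The exact variables in "datasets_3D/paths.py" are defined by the repository; conceptually, you are just pointing each dataset name at the directory produced in Step 2. A hypothetical illustration (all names and paths below are made up):

LUNA_PRETRAIN_ROOT = "../../Data/LUNA2016_processed"   # output of pre_processing/ (hypothetical)
LUNA_NCC_ROOT = "../../Data/LUNA2016_ncc"              # nodule cubes for fine-tuning (hypothetical)

DATASET_PATHS = {
    "luna_pretrain": LUNA_PRETRAIN_ROOT,
    "luna_ncc": LUNA_NCC_ROOT,
}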

Step4. Pretrain on the pretext tasks.

Find the config files corresponding to the different SSL pretext tasks in "configs/" and make sure the configs match your training setting:

python configs/luna_xxx_3d_config.py
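
The fields inside each config script are specific to this repository, so treat the following only as a hedged sketch of the kinds of settings worth double-checking before launching (data root, backbone, input size, optimizer settings, epochs, output directory); none of these attribute names are the repo's actual ones.

class PretrainConfig:                                # hypothetical, for illustration only
    data_root = "../../Data/LUNA2016_processed"      # pre-processed LUNA2016 from Steps 2-3
    network = "unet_3d"                              # backbone used by the pretext task
    input_size = (64, 64, 32)                        # crop size fed to the model
    batch_size = 8
    lr = 1e-3
    epochs = 200
    save_dir = "./checkpoints/luna_pretext"          # where the pretrained weights are written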

How to fine-tune?

Step1. Prepare the target dataset

Step2. Pre-process the target dataset

Example: for data pre-processing in the NCC task:

python luna_node_extraction.py
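
luna_node_extraction.py is the repository's own script. As a rough sketch of what nodule extraction from LUNA16 typically involves, the snippet below converts one world-coordinate annotation (coordX/Y/Z from annotations.csv) into voxel indices and crops a cube around it; the cube size, file name, and coordinates are illustrative, and boundary handling is omitted.

import numpy as np
import SimpleITK as sitk

def extract_cube(mhd_path, world_xyz, cube=32):
    image = sitk.ReadImage(mhd_path)
    volume = sitk.GetArrayFromImage(image)            # (z, y, x) voxel array
    origin = np.array(image.GetOrigin())              # (x, y, z) in mm
    spacing = np.array(image.GetSpacing())            # (x, y, z) in mm
    # world coordinates (mm) -> voxel indices, then index as (z, y, x)
    vx, vy, vz = np.rint((np.array(world_xyz) - origin) / spacing).astype(int)
    half = cube // 2
    return volume[vz - half:vz + half, vy - half:vy + half, vx - half:vx + half]

cube = extract_cube("../../Data/LUNA2016/subset0/example.mhd", (-100.0, 67.3, -231.9))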

Step3. List the paths to the pre-processed datasets in "datasets_3D/paths.py"

Step4. Fine-tune a pretrained model on the target dataset.

Find the config files corresponding to the target tasks in "configs/", make sure the configs match your training setting, and change the default pretrained_path to your own path:

Example: to fine-tune on the NCC task:

python luna_ncc_3d_config.py
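
Setting pretrained_path in the config is what triggers the repository's own weight loading; if you want to verify a checkpoint by hand first, the usual PyTorch pattern is sketched below (the tiny model, checkpoint path, and key handling are hypothetical, not the repo's exact API).

import torch
import torch.nn as nn

# hypothetical downstream model; the repo builds its own from the target-task config
model = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv3d(16, 2, 1))

checkpoint = torch.load("path/to/pretrained.pth", map_location="cpu")  # placeholder path
state_dict = checkpoint.get("state_dict", checkpoint)    # handle either saving convention

# keep only the tensors whose names and shapes match the downstream model
model_state = model.state_dict()
matched = {k: v for k, v in state_dict.items()
           if k in model_state and v.shape == model_state[k].shape}
model.load_state_dict(matched, strict=False)             # unmatched heads stay randomly initialized
print(f"loaded {len(matched)}/{len(model_state)} tensors from the checkpoint")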

Ongoing...

We are still working on more implementations of self-supervised methods for medical image analysis. Feel free to contribute!

More?

The full paper can be found here. More details can be found in the supplementary material.