Luffy03 / VoCo

[CVPR 2024] VoCo: A Simple-yet-Effective Volume Contrastive Learning Framework for 3D Medical Image Analysis
Apache License 2.0

Regarding the reproduction of the paper #15

Open blofn opened 5 months ago

blofn commented 5 months ago

I noticed that the structure in your code is inconsistent with the structure in the paper. In your code, VoCoHead uses a student-teacher model, which is not mentioned in your paper. So, how is the method in your paper implemented?

Luffy03 commented 5 months ago

Hi, many thanks for your attention to our work! Yes, as you say, the current version has been updated to teacher-student models, which are an advance over our previous conference paper. The details will be introduced in our extension paper.
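The teacher-student update mentioned here typically follows the exponential-moving-average (EMA) pattern common in self-supervised frameworks (e.g. BYOL/DINO-style training). Below is a minimal, hypothetical sketch of that update rule; the function and parameter names are illustrative only and are not taken from VoCoHead or the VoCo repository.

```python
# Minimal sketch of an EMA teacher update, the usual mechanism behind
# student-teacher self-supervised models. All names here are illustrative,
# not from the VoCo codebase.

def ema_update(teacher: dict, student: dict, momentum: float = 0.999) -> dict:
    """teacher <- momentum * teacher + (1 - momentum) * student, per parameter."""
    return {
        name: momentum * teacher[name] + (1.0 - momentum) * student[name]
        for name in teacher
    }

# Toy example with one scalar "parameter":
teacher = {"w": 1.0}
student = {"w": 0.0}
teacher = ema_update(teacher, student, momentum=0.9)
# teacher["w"] is now 0.9
```

The teacher receives no gradients; it only tracks a slow-moving average of the student, which stabilizes the targets the student is trained against.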

jsadu826 commented 4 months ago

Hi, could you please upload the code of the original model used in the CVPR 2024 paper?

> Hi, many thanks for your attention to our work! Yes, as you say, the current version has been updated with the teacher-student models, which are advanced from our previous conference paper. The details will be introduced in our extension paper.

Luffy03 commented 4 months ago

Ok, no problem! I will check it again and upload it as soon as possible.

Luffy03 commented 4 months ago

I am so sorry for my late reply; I was traveling. Here is the code link to the old version: https://www.dropbox.com/scl/fi/ounrqw35msacn2tynm87s/voco_headv1_old.py?rlkey=ykgu5xcdt6k07plonarp8bzw7&st=zxlqifai&dl=0

jsadu826 commented 2 months ago

> I am so sorry for my late reply since I was on a travel. Here is the code link of the old version https://www.dropbox.com/scl/fi/ounrqw35msacn2tynm87s/voco_headv1_old.py?rlkey=ykgu5xcdt6k07plonarp8bzw7&st=zxlqifai&dl=0

Hello, when I pretrained VoCo on BTCV, TCIA Covid19, and LUNA16 using your old version of voco_head for about 60,000 steps, I found that the training loss did not decrease.

Attachments: training_loss, part_of_training_log.txt

The implementation details followed your CVPR paper, using 1 V100 GPU.

Luffy03 commented 2 months ago

Weird. It seems the training loss is not consistent with our provided training log: https://www.dropbox.com/scl/fi/rmqy9n2gio5tptbhlt239/20240115_232208.txt?rlkey=0jmnpz3n77bb1b9r9wt9aqkrv&dl=0 (this log was produced by the old version). Then, would you please try changing the lr to 1e-4? https://github.com/Luffy03/VoCo/blob/4c3fecc4b5359a61b0374b1a9ba9e4fbdaa65b97/voco_train.py#L148
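For anyone trying this fix, here is a hedged sketch of what lowering the learning rate looks like. The `--lr` flag name is an assumption based on the linked line of voco_train.py; the argparse stub below is illustrative only, not the repository's actual setup.

```python
# Hypothetical sketch of overriding the pre-training learning rate to 1e-4.
# The real flag lives in voco_train.py (see the line linked above); this
# argparse stub only illustrates the change.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--lr", default=1e-4, type=float,
                    help="base learning rate; try 1e-4 if the loss plateaus")

# Parse an explicit argv list so the sketch runs standalone:
args = parser.parse_args(["--lr", "1e-4"])
```

On the command line this would correspond to something like `python voco_train.py --lr 1e-4`, assuming the script exposes the learning rate as a flag.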

Luffy03 commented 2 months ago

Maybe it is caused by different GPU versions? I have tried H800 and A800, but not yet V100.

jsadu826 commented 2 months ago

Thank you! I'll try lr=1e-4.

By the way, I made some code modifications to adapt to the current GitHub repo version. I'd be glad if you could help me double-check them.

[screenshots of the code modifications]

Luffy03 commented 2 months ago

Looks fine so far.

Devil-Ideal commented 2 months ago

> I am so sorry for my late reply since I was on a travel. Here is the code link of the old version https://www.dropbox.com/scl/fi/ounrqw35msacn2tynm87s/voco_headv1_old.py?rlkey=ykgu5xcdt6k07plonarp8bzw7&st=zxlqifai&dl=0

Hi! How many GPUs are needed to train the old model (CVPR 2024), what is the memory of a single GPU, and what is the batch size?

Luffy03 commented 2 months ago

> I am so sorry for my late reply since I was on a travel. Here is the code link of the old version https://www.dropbox.com/scl/fi/ounrqw35msacn2tynm87s/voco_headv1_old.py?rlkey=ykgu5xcdt6k07plonarp8bzw7&st=zxlqifai&dl=0

> Hi ! How many GPUs are needed for the old model (in CVPR2024) during training, what is the GPU memory of a single card and what is batch size?

For the CVPR version, we also used H800 GPUs (80 GB memory) with a batch size of 4.

jsadu826 commented 2 months ago

Here is an update on reproduction.

My training log and modifications to the official code (based on commit f70606b): modified_files_and_log.zip

Loss curves:

[loss curve screenshot]

@FengheTan9

Luffy03 commented 2 months ago

Thank you very much!

Luffy03 commented 1 month ago

Dear researchers, our work is now available at Large-Scale-Medical, if you are still interested in this topic. Thank you very much for your attention to our work; it encourages me a lot!