UCSC-VLAA / SwinMM

[MICCAI 2023] This repository includes the official implementation of our paper "SwinMM: Masked Multi-view with Swin Transformers for 3D Medical Image Segmentation"

Semi-supervised training code #5

Closed · thangngoc89 closed 1 year ago

thangngoc89 commented 1 year ago

Hi, thank you for releasing your work. I saw that semi-supervised training is mentioned in the paper and figures, but I didn't see any actual numbers or training code for it. Can you please elaborate?

HUANGLIZI commented 1 year ago

Thanks for your attention to our work. For the semi-supervised results, please check Table 5 in the paper. You can use the JSON files we provide at https://github.com/UCSC-VLAA/SwinMM/blob/master/WORD/dataset/ for the semi-supervised fine-tuning.

thangngoc89 commented 1 year ago

@HUANGLIZI Thank you for answering. I have checked the code path you provided, and it looks like the labels are still used during "unsupervised" training, since the unsupervised mode shares its training transform with the supervised mode, which includes the RandCropByPosNegLabeld transformation. I'm trying to make this work without that augmentation and will report back here later.
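
For reference, a minimal sketch of the kind of label-dependent sampler being discussed (the keys and parameter values here are illustrative, not the repository's exact config):

```python
from monai.transforms import (
    Compose,
    EnsureChannelFirstd,
    LoadImaged,
    RandCropByPosNegLabeld,
)

# Illustrative supervised transform: RandCropByPosNegLabeld balances
# foreground/background patches using a ground-truth label volume, so it
# cannot run on unlabelled data.
supervised_transform = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    RandCropByPosNegLabeld(
        keys=["image", "label"],
        label_key="label",          # requires a label volume, hence the issue
        spatial_size=(96, 96, 96),  # example patch size
        pos=1.0,
        neg=1.0,
        num_samples=4,
    ),
])
```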

HUANGLIZI commented 1 year ago

Yes, you should disable the RandCropByPosNegLabeld transformation in the "unsupervised" training.

thangngoc89 commented 1 year ago

@HUANGLIZI thanks. I've successfully trained SwinMM on the semi-supervised task by doing what you suggested: I replaced RandCropByPosNegLabeld with RandSpatialCropSamplesd for patch sampling, and observed improvements on the test set over fully supervised training on a limited labelled dataset.
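
A minimal sketch of that replacement, assuming the same illustrative keys and patch size as above (RandSpatialCropSamplesd crops patches at random locations and only needs the image key, so no labels are required):

```python
from monai.transforms import (
    Compose,
    EnsureChannelFirstd,
    LoadImaged,
    RandSpatialCropSamplesd,
)

# Illustrative unsupervised transform: label-free random patch sampling.
unsupervised_transform = Compose([
    LoadImaged(keys=["image"]),
    EnsureChannelFirstd(keys=["image"]),
    RandSpatialCropSamplesd(
        keys=["image"],
        roi_size=(96, 96, 96),  # match the supervised patch size
        num_samples=4,
        random_size=False,      # fixed-size crops
    ),
])
```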