Open · menna1012 opened this issue 4 months ago
Hey @menna1012
Interesting and very valid question. Generally this is not possible out of the box; you would need to manually alter the code for self-supervised pretraining. What you could do, however, to leverage the unlabelled images is the following:
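Roughly, the idea is to pretrain an encoder on the unlabelled images with some self-supervised objective and then load those weights before the supervised run. Below is a minimal sketch of that idea, assuming a toy autoencoder objective and placeholder networks; this is illustrative PyTorch only, not nnU-Net's trainer code, and in practice you would adapt nnU-Net's own network and training loop instead.

```python
# Illustrative sketch only (not nnU-Net code): pretrain a small encoder on
# unlabelled images with a reconstruction objective, then reuse the encoder
# weights to initialize a segmentation model before supervised training.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Placeholder encoder shared between pretraining and segmentation."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class AutoEncoder(nn.Module):
    """Self-supervised pretraining model: encoder + reconstruction decoder."""
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


class SegModel(nn.Module):
    """Supervised model that shares the encoder architecture above."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = Encoder()
        self.head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, x):
        return self.head(self.encoder(x))


# 1) Self-supervised pretraining on unlabelled images (dummy tensors here
#    stand in for the ~1500 unlabelled images).
unlabelled = torch.rand(16, 1, 64, 64)
ae = AutoEncoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(5):  # a real run would use many more steps/epochs
    recon = ae(unlabelled)
    loss = nn.functional.mse_loss(recon, unlabelled)
    opt.zero_grad()
    loss.backward()
    opt.step()
torch.save(ae.encoder.state_dict(), "pretrained_encoder.pth")

# 2) Initialize the segmentation model's encoder from the pretrained weights,
#    then continue with ordinary supervised training on the labelled images.
seg = SegModel()
seg.encoder.load_state_dict(torch.load("pretrained_encoder.pth"))
```

Any self-supervised objective (masked reconstruction, contrastive learning, etc.) can stand in for the toy autoencoder here; the key point is that the supervised model shares the encoder architecture, so the pretrained weights can be loaded directly before training on the labelled data.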
Hope this helps!
Hi,
I have about 500 labelled images and 1500 unlabelled images. I wonder if it is possible to pretrain a model on the unlabelled images and then use the pretrained weights to initialize the model for training on the labelled dataset.
I followed the steps at https://github.com/Kent0n-Li/nnSAM/blob/main/documentation/pretraining_and_finetuning.md but got stuck at the nnUNetv2_extract_fingerprint and nnUNetv2_move_plans_between_datasets steps, because labels are required (a "labels" field in the dataset.json file and labelled images in the labelsTr folder).
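For reference, those commands appear to expect roughly this minimal dataset.json; the channel name, label name, case count, and output path in the sketch below are just placeholders, not my actual data.

```python
# Rough sketch of the minimal dataset.json nnU-Net v2 expects (placeholder
# values); "background" must map to 0.
import json

dataset_json = {
    "channel_names": {"0": "MRI"},             # placeholder modality name
    "labels": {"background": 0, "target": 1},  # placeholder label names
    "numTraining": 500,                        # number of labelled training cases
    "file_ending": ".nii.gz",
}

# Write it into the dataset folder (e.g. nnUNet_raw/DatasetXXX_Name/dataset.json).
with open("dataset.json", "w") as f:
    json.dump(dataset_json, f, indent=2)
```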
Is there a way to utilize the unlabelled dataset to improve overall training results on the labelled one, or does nnUNet only work with labelled datasets?
Thanks in advance for your response.