[Closed] Marrywithyou closed this issue 10 months ago.
Hi. Yes, the number of folders represents the number of classes that you have, which is the default for pytorch dataloaders.
If you have unlabelled data, you can just dump all of it into a single folder. After that you need to set no_labels=True (there's an example in scripts/pretrain/custom/byol.yaml).
You won't be able to run a linear classifier or anything like that, because you don't have classes to evaluate on, but you will still have a model that should produce good features for your data.
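To illustrate the folder-per-class convention mentioned above, here is a minimal sketch of how an ImageFolder-style dataset derives its classes from subdirectories (illustrative stdlib-only code, not solo-learn's actual loading logic; torchvision's `ImageFolder` follows the same convention):

```python
# Each subdirectory under the dataset root becomes one class, so the number
# of folders directly determines the number of classes the dataloader sees.
import tempfile
from pathlib import Path

def folder_classes(root):
    """Return the sorted class names (one per subfolder), mirroring the
    folder-per-class convention used by PyTorch-style dataloaders."""
    return sorted(d.name for d in Path(root).iterdir() if d.is_dir())

# Labelled layout: three subfolders -> three classes.
root = Path(tempfile.mkdtemp())
for cls in ["A", "B", "C"]:
    (root / cls).mkdir()
    (root / cls / "img0.jpg").touch()
print(folder_classes(root))  # ['A', 'B', 'C']

# Unlabelled layout: dump everything into one folder -> a single dummy class.
root2 = Path(tempfile.mkdtemp())
(root2 / "all").mkdir()
(root2 / "all" / "img0.jpg").touch()
print(folder_classes(root2))  # ['all']
```

This is why, with unlabelled data, a single folder plus no_labels=True is enough: the class labels are placeholders and only the images matter for self-supervised pretraining.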
Thank you very much for your reply. I have one more question. When training the way described above, the linear evaluation still runs, but my model has already been trained this way for a long time and the training is complete. Since the linear evaluation and the self-supervised training are separate, if I just want to use the weights of the backbone network for downstream tasks, can I use them directly?
Yes, your backbone was trained fine.
Hi, I'm having some problems with self-supervised pre-training.

Background: I am using a large unlabelled dataset for pretraining. To match the framework's requirements, I use the ImageNet-100 data format: I randomly split the dataset into a training set and a validation set, and distribute the images evenly across three folders A, B, and C.

What I observed: during self-supervised training, the train_acc1_step metric depends on the number of folders. With three folders it is basically 0.33, and with 50 folders it is 0.02.

Question: does the number of folders affect the pre-training of the self-supervised model? @vturrisi
I am not sure whether my training is normal and reliable. I hope you can help me; thank you very much.
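The accuracy values you report are exactly what random labels would produce: when the folder assignment is random, the online classifier can do no better than chance, so train_acc1 hovers around 1 / num_classes. A quick sanity check (a simulation sketch, not solo-learn code):

```python
# With K randomly assigned classes, the probability that a random guess
# matches a random label is 1/K: ~0.33 for 3 folders, ~0.02 for 50 folders,
# matching the train_acc1_step values observed above.
import random

def chance_accuracy(num_classes, num_samples=100_000, seed=0):
    """Empirical accuracy of random guessing against random labels."""
    rng = random.Random(seed)
    hits = sum(
        rng.randrange(num_classes) == rng.randrange(num_classes)
        for _ in range(num_samples)
    )
    return hits / num_samples

print(chance_accuracy(3))   # ~0.33
print(chance_accuracy(50))  # ~0.02
```

So the metric tracks the number of folders only because the labels are meaningless; it says nothing about the quality of the self-supervised features being learned.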