DaIGaN2019 opened 1 week ago
Hi @DaIGaN2019 - For the iBOT ViT-B/16 comparison, we used the default hyper-parameters here https://lf3-nlp-opensource.bytetos.com/obj/nlp-opensource/archive/2022/ibot/vitb_16/args.txt, so:

- `--out_dim` and `--patch_out_dim` equal to 8192
- `norm_in_head` equal to None
- `--warmup_teacher_patch_temp` and `--warmup_teacher_temp` equal to 0.04
- `--teacher_patch_temp` and `--teacher_temp` equal to 0.07

Yes, the loss should decrease, and then spike up after `freeze_last_layer == 3` epochs. The model is trained for 80 epochs of IN-22K training (translated into iterations), and we evaluate on the final model.
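For context, the two temperature values above are typically connected by a warmup schedule: the teacher temperature starts at `--warmup_teacher_temp` (0.04) and is linearly ramped up to `--teacher_temp` (0.07), then held constant. A minimal sketch of that schedule, following the public DINO reference implementation (the exact iBOT code and the warmup length of 30 epochs are assumptions here; check the linked args.txt for the real value):

```python
import numpy as np

# Defaults quoted above; warmup length is an assumed value.
warmup_teacher_temp = 0.04
teacher_temp = 0.07
warmup_teacher_temp_epochs = 30  # assumption; verify against args.txt
epochs = 80

# Linear warmup from 0.04 to 0.07, then constant at 0.07.
teacher_temp_schedule = np.concatenate((
    np.linspace(warmup_teacher_temp, teacher_temp, warmup_teacher_temp_epochs),
    np.full(epochs - warmup_teacher_temp_epochs, teacher_temp),
))

print(teacher_temp_schedule[0], teacher_temp_schedule[-1])  # 0.04 0.07
```

At each epoch, `teacher_temp_schedule[epoch]` would be used to sharpen the teacher's output distribution in the distillation loss.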
Thank you for your answer! It is a great help to me. Thank you again for your wonderful work.
Hello author, thank you for sharing such wonderful code. I couldn't find some of the parameter settings for the experiment in the paper (including the supplementary materials). Could you share more hyper-parameter settings? Framework: iBOT, arch: ViT-B/16. Unknown parameters: are both `--out_dim` and `--patch_out_dim` equal to 8192?
Also, may I ask whether, in your experiments, the loss ever becomes NaN, or first decreases and then increases? The number of epochs you set during training is 80. Do you use the final trained model for testing, or do you choose a model saved at a certain epoch?
Thank you again for sharing! We look forward to your reply!