facebookresearch / dino

PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO
Apache License 2.0

Intermediate checkpoints? #259

Open ghost opened 11 months ago

ghost commented 11 months ago

Hi, thanks for this fantastic work!

Unfortunately, I am not able to reproduce the results (I am working with ViT-B/16). I only have access to 4 GPUs, so I ran the code with a total batch size of 128 instead of 1024. As advised in other threads here, I tried increasing the teacher momentum, but that did not help. Would it be possible to release some intermediate backbones for ViT-B/16, e.g. the checkpoints at epochs 1, 10, 20, and 100?
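
For context, this is roughly how I am launching the run (the flag names come from `main_dino.py`; the paths, the exact momentum value, and the checkpoint frequency are placeholders/assumptions on my side, not a claim about the official recipe):

```bash
# Sketch of my 4-GPU launch: total batch = 4 x 32 = 128 instead of 1024.
# momentum_teacher 0.9995 follows the general small-batch advice in other
# threads; saveckp_freq is only set so that intermediate checkpoints are kept.
python -m torch.distributed.launch --nproc_per_node=4 main_dino.py \
  --arch vit_base \
  --patch_size 16 \
  --batch_size_per_gpu 32 \
  --momentum_teacher 0.9995 \
  --saveckp_freq 10 \
  --data_path /path/to/imagenet/train \
  --output_dir /path/to/output
```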

Thanks in advance!