Closed KeyaoZhao closed 1 year ago
Hi @KeyaoZhao, thanks for your attention to our work!
Sorry, I wrote the wrong command. The `--resume` flag should be replaced with `--pretrained`; then `attn.attention_biases` will be interpolated.
python -m torch.distributed.launch --nproc_per_node 8 main.py --cfg configs/higher_resolution/tiny_vit_21m_224to384.yaml --data-path ./ImageNet --batch-size 32 --pretrained ./tiny_vit_21m_22kto1k_distill.pth --output ./output --accumulation-steps 4
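To illustrate what "interpolated" means here: when the input resolution grows from 224 to 384, the attention-bias table, which is tied to the spatial grid of tokens, no longer matches the new grid size, so the pretrained table is resized. Below is a minimal sketch of that kind of resize; the function name and the `(num_heads, grid*grid)` shape are assumptions for illustration, not TinyViT's exact layout.

```python
import torch
import torch.nn.functional as F

def interpolate_attention_biases(biases: torch.Tensor,
                                 old_size: int,
                                 new_size: int) -> torch.Tensor:
    """Resize a per-head bias table from an old spatial grid to a new one.

    biases: (num_heads, old_size * old_size) -- one flattened bias grid
            per attention head (assumed layout for this sketch).
    """
    num_heads = biases.shape[0]
    # View each head's flat table as a 2D grid: (1, num_heads, old, old).
    grid = biases.reshape(1, num_heads, old_size, old_size)
    # Bicubic interpolation to the new grid resolution.
    resized = F.interpolate(grid, size=(new_size, new_size),
                            mode="bicubic", align_corners=False)
    # Flatten back to (num_heads, new_size * new_size).
    return resized.reshape(num_heads, new_size * new_size)
```

Loading with `--pretrained` lets the code apply this kind of resize to each mismatched bias table before fine-tuning, whereas `--resume` expects the checkpoint shapes to match the model exactly, which is why it errored.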
How to finetune at a higher resolution? When we finetune at a higher resolution, from 224 to 384, there is an error like:
Thanks a lot.