Can I just run the following, with --resume pointing at the released checkpoint?

torchrun --standalone --nproc_per_node=${NGPU} run_mae_pretraining.py ... --model pretrain_videomae_giant_patch14_224 --resume checkpoints/vit_g_hybrid_pt_1200e.pth
This gave: Error(s) in loading state_dict for OptimizedModule:
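For context, here is a minimal sketch of how I'm inspecting the checkpoint on my side. The checkpoint path comes from the command above; the nesting of weights under "model" and the `_orig_mod.` handling are my assumptions about torch.compile's OptimizedModule wrapper, not the repo's documented behavior:

```python
# Minimal sketch, not the repo's API: look at the checkpoint keys and the
# "_orig_mod." prefix that torch.compile's OptimizedModule adds to key names.
import torch

ckpt = torch.load("checkpoints/vit_g_hybrid_pt_1200e.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt)  # weights are often nested under "model" (assumption)

print(list(state_dict.keys())[:5])  # e.g. encoder.patch_embed.* in a plain checkpoint

# If the running model is compiled, its own state_dict keys look like
# "_orig_mod.encoder.patch_embed...". One workaround (assumption, untested here)
# is to load into the unwrapped module instead of the OptimizedModule:
#   msg = model._orig_mod.load_state_dict(state_dict, strict=False)
#   print(msg.missing_keys, msg.unexpected_keys)
```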
Can you share the vit_base_patch16_224 checkpoint after 1200e?