OpenGVLab / VideoMAEv2

[CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking
https://arxiv.org/abs/2303.16727
MIT License

Starting the pretraining from a checkpoint #33

Closed · SushantGautam closed 1 year ago

SushantGautam commented 1 year ago

Can I just run: torchrun --standalone --nproc_per_node=${NGPU} run_mae_pretraining.py . . . .
--model pretrain_videomae_giant_patch14_224 together with --resume checkpoints/vit_g_hybrid_pt_1200e.pth?

This gave: Error(s) in loading state_dict for OptimizedModule:

Can you share the vit_base_patch16_224 checkpoint after 1200e?
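
For context, OptimizedModule is the wrapper that torch.compile puts around a model, and its state_dict keys carry an `_orig_mod.` prefix, so loading a plain released weights file into the compiled wrapper typically fails with missing/unexpected keys. Below is a minimal sketch of loading the released weights before compiling; `build_model()` and the assumed key layout of the checkpoint are illustrative placeholders, not the repository's actual API:

```python
import torch

# Minimal sketch: load released weights into the plain model *before* calling
# torch.compile(). The key names used here ("model", "module.", "_orig_mod.")
# are common conventions and may not match vit_g_hybrid_pt_1200e.pth exactly.
ckpt = torch.load("checkpoints/vit_g_hybrid_pt_1200e.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt

# Strip wrapper prefixes ("module." from DDP, "_orig_mod." from torch.compile)
# so the keys match a plain nn.Module.
cleaned = {k.replace("module.", "").replace("_orig_mod.", ""): v
           for k, v in state_dict.items()}

model = build_model()  # hypothetical stand-in for however the ViT-g model is built
missing, unexpected = model.load_state_dict(cleaned, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)

model = torch.compile(model)  # compile only after the weights are in place
```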

congee524 commented 1 year ago

Please follow our tutorial carefully. The command you are using seems to be incorrect.

SushantGautam commented 1 year ago

Yes, I was able to fix it.
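
The thread does not record the exact fix, but in MAE-style training scripts `--resume` usually expects a full training checkpoint (model plus optimizer and epoch state), whereas released files such as vit_g_hybrid_pt_1200e.pth typically contain model weights only. A quick, generic way to check what a given file actually contains (nothing here is specific to this repository):

```python
import torch

# Inspection sketch: a file usable with --resume typically stores sections such
# as "model", "optimizer", "epoch"; a weights-only release usually stores
# parameter tensors directly, so the keys printed would be parameter names.
ckpt = torch.load("checkpoints/vit_g_hybrid_pt_1200e.pth", map_location="cpu")
print(list(ckpt.keys())[:20] if isinstance(ckpt, dict) else type(ckpt))
```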