MCG-NJU / VideoMAE

[NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training
https://arxiv.org/abs/2203.12602

pretrained models on ImageNet1K #50

Closed qzhai closed 2 years ago

qzhai commented 2 years ago

Hi,

Thank you for sharing the exciting work!

Could you please release the pre-trained models on ImageNet1K (with input size 224x224 or 320x320)?

Many many thanks!

yztongzhan commented 2 years ago

Hi @zhaiqx! Please refer to MAE-pytorch or mae for the pre-trained models on ImageNet1K.
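For anyone loading those image-domain checkpoints: the MAE models linked above are pre-trained at 224x224, so using them at 320x320 requires resizing the positional embeddings to the new patch grid. The sketch below is a minimal, hypothetical illustration of that standard interpolation step (the function name and tensor layout are assumptions, not the exact code in MAE-pytorch or mae):

```python
# Hypothetical sketch: adapting a ViT positional embedding pre-trained at
# 224x224 to a 320x320 input by bicubic-resizing its patch-position grid.
import torch
import torch.nn.functional as F

def interpolate_pos_embed(pos_embed, new_size, patch_size=16):
    """pos_embed: (1, 1 + N, D) with a leading [CLS] token slot."""
    cls_tok, patch_pos = pos_embed[:, :1], pos_embed[:, 1:]
    dim = pos_embed.shape[-1]
    old_grid = int(patch_pos.shape[1] ** 0.5)  # e.g. 14 for 224 / 16
    new_grid = new_size // patch_size          # e.g. 20 for 320 / 16
    # (1, N, D) -> (1, D, H, W) so F.interpolate treats it as an image
    patch_pos = patch_pos.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)
    patch_pos = F.interpolate(patch_pos, size=(new_grid, new_grid),
                              mode="bicubic", align_corners=False)
    patch_pos = patch_pos.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, dim)
    return torch.cat([cls_tok, patch_pos], dim=1)

# ViT-B/16 at 224x224: 14x14 = 196 patch tokens plus one [CLS] token
pos = torch.randn(1, 1 + 14 * 14, 768)
resized = interpolate_pos_embed(pos, new_size=320)
print(resized.shape)  # torch.Size([1, 401, 768]) -> 20x20 patches + [CLS]
```

The resized tensor would then be copied into the checkpoint's `pos_embed` entry before `load_state_dict`; both linked repos ship their own version of this step.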