MCG-NJU / VideoMAE

[NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training
https://arxiv.org/abs/2203.12602

Please confirm whether the preprocessing is done twice in run_mae_pretraining.py #10

Closed peiliu0408 closed 2 years ago

peiliu0408 commented 2 years ago

As the title says, I found that the videos appear to be preprocessed twice; refer to the dataset and train_one_epoch. Can you help check this?
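For context, a common reason MAE-style pretraining loops look like they preprocess twice is that the dataset normalizes frames once for the encoder input, while the training step only reuses the same mean/std to rebuild raw pixel values for the reconstruction target. The sketch below is a minimal, hypothetical illustration of that pattern (the function names and the use of ImageNet statistics are assumptions, not the repo's exact code):

```python
import torch

# Hypothetical ImageNet channel statistics, broadcastable over (T, C, H, W).
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def dataset_transform(frames: torch.Tensor) -> torch.Tensor:
    """Dataset side: normalize raw [0, 1] frames once for the encoder."""
    return (frames - MEAN) / STD

def build_target(normalized_frames: torch.Tensor) -> torch.Tensor:
    """Training-loop side: undo the normalization to recover pixel targets.

    This reuses the same statistics but is not a second preprocessing pass;
    it only reconstructs raw pixels so the loss can be computed on them.
    """
    return normalized_frames * STD + MEAN

# Toy clip: 2 frames, 3 channels, 16x16 pixels.
raw = torch.rand(2, 3, 16, 16)
model_input = dataset_transform(raw)   # applied once, in the dataset
target = build_target(model_input)     # applied on the fly, in the train step

# The two steps cancel: the target equals the original raw pixels.
print(torch.allclose(target, raw, atol=1e-6))  # → True
```

If the repo's engine follows this pattern, the second "preprocessing" in train_one_epoch would be target construction rather than a redundant transform; checking the files the maintainers point to would confirm this.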

yztongzhan commented 2 years ago

Hi @peiliu0408! Please refer to dataset.py and engine_for_pretraining.py in MAE-pytorch, or the official implementation.