OpenGVLab / VideoMAEv2

[CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking
https://arxiv.org/abs/2303.16727
MIT License

Request for the training script for VideoMAE-V2-Base #34

Closed qinghuannn closed 6 months ago

qinghuannn commented 1 year ago

Hi. Thanks for your nice work! I need to train VideoMAE-V2-Base on the Kinetics-400 dataset, but I couldn't find a training script for VideoMAE-V2-Base, and the paper doesn't report its performance on Kinetics-400. Could you tell me how VideoMAE-V2-Base performs on Kinetics-400?

JinChow commented 1 year ago

Hello @qinghuannn, have you successfully run the code? I have run into some trouble, and I would appreciate it if you could help me!

qinghuannn commented 1 year ago

No, I haven't run the code, since I didn't find the training script for VideoMAE-V2. You can post your questions on this repo; the authors will help you.


congee524 commented 1 year ago

Thanks for your attention. To train a ViT model with the VideoMAE V2 method on K400, you only need to modify the dataset root in the training script.
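
To illustrate, a minimal sketch of the kind of change involved: the repo's pretraining shell scripts set path variables that must be pointed at your local Kinetics-400 data. The variable and file names below (`DATA_PATH`, `OUTPUT_DIR`, `train.csv`) are assumptions for illustration; check the actual script in your checkout for the exact names.

```shell
# Hypothetical sketch: point the training script at your local K400 data.
# DATA_PATH / OUTPUT_DIR are assumed variable names; verify against the
# script shipped in the repository before running.
DATA_PATH='data/k400/train.csv'      # annotation list for your K400 copy (assumed layout)
OUTPUT_DIR='work_dirs/vit_b_k400'    # where checkpoints and logs are written

echo "Launching with DATA_PATH=${DATA_PATH} OUTPUT_DIR=${OUTPUT_DIR}"
# ...followed by the repo's torchrun/python launch command with these paths.
```

The rest of the script (model size, masking ratio, batch size) can be left as shipped; only the data paths are machine-specific.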