OpenGVLab / VideoMAEv2

[CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking
https://arxiv.org/abs/2303.16727
MIT License

[Feature] Support pretraining with PyTorch 2.0 #16

Closed · congee524 closed this issue 1 year ago

congee524 commented 1 year ago

PyTorch 2.0 is very effective at accelerating model training and reducing GPU memory usage.
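
A minimal sketch of what this would look like for pretraining: wrap the model with `torch.compile` before the training loop. The tiny model and dummy loss below are placeholders, not the repository's actual VideoMAE V2 pretraining classes.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for the VideoMAE V2 pretraining model; the real model and its
# masked-autoencoding loss live in this repository, this is only illustrative.
model = nn.Sequential(
    nn.Linear(768, 768),
    nn.GELU(),
    nn.Linear(768, 768),
).to(device)

# torch.compile (new in PyTorch 2.0) traces the model and generates fused
# kernels, which is where the training speed-up and memory savings come from.
compiled_model = torch.compile(model)

optimizer = torch.optim.AdamW(compiled_model.parameters(), lr=1e-4)

for step in range(10):
    x = torch.randn(8, 768, device=device)
    loss = compiled_model(x).pow(2).mean()  # dummy reconstruction-style loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The first iteration is slower because it triggers compilation; subsequent steps run the compiled graph.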

The compatibility of PyTorch 2.0's compilation with DeepSpeed is unknown, so there are no plans to support fine-tuning with `torch.compile`.