THUDM / CogVideo

Text-to-video generation. The repo for ICLR2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers"
Apache License 2.0

How many frames (seconds) are there in each video sample used in the training process? #23

Closed BinZhu-ece closed 1 year ago

BinZhu-ece commented 1 year ago

How many frames (seconds) are there in each video sample used in the training process? Is it the same as the output sample, a 4-second clip of 32 frames? What is the length of the videos in the dataset used for your training? Did you use the complete videos directly, or slice them?

wenyihong commented 1 year ago

The video samples used in the training process are drawn at multiple frame rates: 1, 2, 4, and 8 fps. Due to GPU memory limits and the large scale of the CogVideo model, each model processes 5 frames at a time. The video lengths in our dataset range from 1 second to over 30 seconds. We use complete videos wherever possible to maintain the alignment between video and text in the training set, but may slice a video when it is very long.
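To make the sampling scheme concrete, here is a minimal sketch of how 5 frames might be selected from a source video at one of the target frame rates (1, 2, 4, or 8 fps). This is an illustration only, not the repo's actual data loader; the function name and the centering strategy are assumptions.

```python
def sample_frame_indices(total_frames, video_fps, target_fps, num_frames=5):
    """Pick `num_frames` evenly strided frame indices from a source video,
    simulating a clip at `target_fps` (e.g. 1, 2, 4, or 8 fps).

    The stride is how many source frames lie between consecutive samples,
    so a lower target fps spans a longer stretch of real time.
    """
    stride = max(1, round(video_fps / target_fps))
    span = stride * (num_frames - 1)  # source frames covered by the clip
    if span >= total_frames:
        raise ValueError("video too short for this target frame rate")
    start = (total_frames - span - 1) // 2  # centre the clip in the video
    return [start + i * stride for i in range(num_frames)]

# A 10-second video at 24 fps, sampled at 1 fps, yields 5 frames 24 apart,
# i.e. a 4-second window; at 8 fps the same 5 frames cover only 0.5 s.
print(sample_frame_indices(total_frames=240, video_fps=24, target_fps=1))
print(sample_frame_indices(total_frames=240, video_fps=24, target_fps=8))
```

The key point the sketch illustrates is that the frame count is fixed at 5 per model pass, while the frame rate controls how much wall-clock time those 5 frames cover.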