THUDM / CogVideo

Text-to-video generation. The repo for the ICLR 2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers"
Apache License 2.0

Demonstration data #21

Closed · zhoudaquan closed this issue 1 year ago

zhoudaquan commented 1 year ago

Thanks for the amazing work!

Can I check where the demonstration dataset comes from? Is any part of it publicly available?

Thanks.

wenyihong commented 1 year ago

Hi, sorry for the late response. What do you mean by the demonstration dataset? If you mean the training set, you can use WebVid as an alternative; it contains 10M text-video pairs.
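
For anyone following this suggestion, here is a minimal sketch of reading WebVid-style metadata into (caption, video URL) pairs. The CSV filename and the `name` / `contentUrl` column names are assumptions about the public WebVid metadata release, not something defined by this repo; adjust them to match the files you actually download.

```python
# Minimal sketch, assuming WebVid-style metadata CSVs with a "name" column
# holding the caption and a "contentUrl" column holding the video URL.
import pandas as pd

def load_webvid_pairs(csv_path: str):
    """Yield (caption, video_url) pairs from a WebVid-style metadata CSV."""
    df = pd.read_csv(csv_path)
    for caption, url in zip(df["name"], df["contentUrl"]):
        yield caption, url

# Hypothetical usage: inspect the first text-video pair.
pairs = load_webvid_pairs("results_10M_train.csv")  # placeholder filename
print(next(pairs))
```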