linjieli222 / HERO

Research code for EMNLP 2020 paper "HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training"
https://arxiv.org/abs/2005.00200
MIT License

Using the pretrained HERO on custom datasets #12

Closed by PipiZong 3 years ago

PipiZong commented 3 years ago

Hello,

Thanks for sharing your work! Can the pretrained HERO model be used on Chinese datasets? If not, roughly how long would it take to train a HERO model on a single GPU? Thanks!
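
For context, what I have in mind is roughly the following: load the released pretrained weights and then fine-tune on my own data. This is only a minimal sketch, not this repo's actual loading code; the checkpoint filename and the placeholder model class are illustrative assumptions.

```python
# Minimal sketch (not HERO's actual API): loading a released PyTorch
# checkpoint into a model before fine-tuning on a custom dataset.
import torch
from torch import nn

class PlaceholderModel(nn.Module):
    """Stand-in for the HERO model class defined in this repo."""
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        return self.proj(x)

# Hypothetical checkpoint path; substitute the actual released weights.
ckpt = torch.load("hero_pretrained.pt", map_location="cpu")
state_dict = ckpt.get("model", ckpt)  # weights may be nested under a key

model = PlaceholderModel()
# strict=False reports mismatched keys instead of raising, which helps
# when the fine-tuning setup differs from the pre-training one.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing:", missing)
print("unexpected:", unexpected)
```

In particular, if the text side were switched to a Chinese tokenizer and vocabulary, the word-embedding weights presumably would not transfer, which is why I would want to inspect the missing/unexpected key report above.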