linjieli222 / HERO

Research code for EMNLP 2020 paper "HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training"
https://arxiv.org/abs/2005.00200
MIT License

Checkpoint for TVQA task #27

Closed — raksharamesh14 closed this 3 years ago

raksharamesh14 commented 3 years ago

Hi, thanks for sharing the code. Where can I download the model after it has been finetuned for the QA task (i.e., tvqa_default, per the train-tvqa-8gpu.json config)? I'd like to use the finetuned checkpoint directly for inference on a custom QA dataset. Thanks.

linjieli222 commented 3 years ago

@raksharamesh14

Thanks for your interest in our project, and sorry for the slow response.

Here is the checkpoint finetuned on TVQA: https://convaisharables.blob.core.windows.net/hero/finetune/tvqa_default.pt

Note that this checkpoint is from a reproduced experiment, so the numbers are close to, but not exactly, the results reported in the paper.

linjieli222 commented 3 years ago

Closed due to inactivity.