-
Thanks for your impressive work! I have a question about evaluating video-text retrieval: in datasets such as MSVD and MSRVTT, each video is paired with multiple captions. How do you handle this…
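For reference, the convention I have been assuming (which may or may not match your implementation) is to treat every caption as an independent text query, so a video with 20 captions contributes 20 queries, and R@K checks whether each caption's own paired video lands in the top K. A rough sketch of that metric:

```python
import numpy as np

def recall_at_k(sim, gt_video, k=1):
    """sim: [num_captions, num_videos] caption-to-video similarity matrix;
    gt_video[i]: index of the video paired with caption i.
    (This is my own sketch of the usual convention, not code from this repo.)"""
    order = (-sim).argsort(axis=1)                                  # videos ranked per caption, best first
    hits = (order[:, :k] == np.asarray(gt_video)[:, None]).any(axis=1)
    return hits.mean()
```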
-
Hello, I am interested in this work and excited to see its strong performance. The code has many scripts for extracting the features the model needs, but I'm worried that the features are so …
-
Thanks for sharing your work!!!
I ran the test code provided in the `README.md` on MSRVTT-1kA and obtained the following results:
```
07/18/2023 17:38:05 - INFO - main - ====-zero-shot evalu…
-
Hi @tsujuifu,
Thanks for your great work and tidy github for future works!
I found that the MSRVTT-MC _train_ and _val_ datasets are missing from the Google Drive, and the same goes for the checkpoint.
It woul…
-
I saw that you provide the single-GPU training command, and I ran it successfully.
But I ran into trouble with multi-GPU training. Could you provide the multi-GPU training command, e.g. for the msrvtt data…
-
I have only found the caption files for MSRVTT in the releases. When will the caption files for other datasets (MSVD, VATEX etc.) be provided?
-
Hi~ Thanks for the excellent work.
So the code for **query gated transformation** and **Informative Context Enhancement** is not in the repo? Should I add my own implementation?
If so,…
-
Hi, may I know which file should be passed to `--gt_file`? Is it the train, test, or val.json of MSVD-QA?
I want to replicate the result reported in the paper.
python run_inference_qa_msvd.py \
--cfg-path eval_…
-
Hi, thank you for sharing the code and models.
I have used ckpt_violet_pretrain.pt and ckpt_violet_msrvtt-retrieval with our own data processing (5 frames at an interval of num_frames // 5) for msrvtt …
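For context, our frame sampling is roughly the sketch below (this is our own preprocessing, not your loader; `sample_frame_indices` is just my helper name):

```python
import numpy as np

def sample_frame_indices(num_frames, n=5):
    # uniform sampling at a stride of num_frames // n (clipped to at least 1);
    # very short videos simply yield fewer than n indices
    step = max(num_frames // n, 1)
    return np.arange(0, num_frames, step)[:n]
```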
-
Hi, I found that your MSRVTT config needs train9k.jsonl and test1ka.jsonl, but I can't find anything about them in the README.md. Are they in the hdvila_ofa_captions_db? If the jsonl files are…
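In case I end up building them myself: my understanding is that a .jsonl file is just one JSON object per line, e.g. (the field names below are my guess, not your actual schema):

```python
import json

# hypothetical records; the repo's real jsonl schema may use different field names
records = [
    {"video": "video0", "caption": "a man is playing guitar"},
    {"video": "video1", "caption": "a dog runs on the beach"},
]

with open("train9k.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")  # one JSON object per line
```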