-
Due to the huge size of the original dataset, I extracted frames from the original videos at FPS=1 and trained CLIP4Clip (meanP) on 8 RTX 3090s. Due to the GPU memory constraint, I set the gradient_acc…
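The FPS=1 extraction described above can be sketched as a small helper. This is a hypothetical sketch, not the commenter's actual pipeline: the function name, the `videos/`/`frames/` layout, and the JPEG naming pattern are all assumptions; only the `fps=1` sampling rate comes from the comment.

```shell
# Hedged sketch: build the ffmpeg command that samples a video at
# one frame per second (the FPS=1 extraction mentioned above).
# Paths and naming are assumptions for illustration only.
make_extract_cmd() {
  # $1 = input video file, $2 = output directory for numbered JPEG frames
  printf 'ffmpeg -i %s -vf fps=1 %s/%%06d.jpg\n' "$1" "$2"
}

# Example: print the command for one (hypothetical) video.
make_extract_cmd videos/clip1.mp4 frames/clip1
```

Printing the command rather than executing it keeps the sketch a dry run; piping the output to `sh` (after creating the output directories) would perform the actual extraction.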
-
Hello, I'm looking for the MSVD QA JSON file. I found the MSVD retrieval JSON file [here](https://github.com/OpenGVLab/unmasked_teacher/issues/12), but where is the one for QA?
-
Hello! I tried to reproduce the MSVD results with your code; the image below shows the test results of the model fine-tuned from l16_25m. They are inconsistent with the paper's results, which is probably caused by a different data split. The MSVD text annotations are originally one-to-many, but according to the description of the MSVD dataset in Table 28 of the paper, did you take only one caption from the annotations for each video? Could you provide your annotation JSON file? Thanks!
![image](https://github.com/OpenGVLab/un…
-
First of all, thank you for sharing your good research.
Only the MSRVTT and VATEX training parameters are provided in **scripts**; those for ActivityNet, DiDeMo, and LSMDC are not available. Can you pro…
-
Hi, I'm trying to reproduce the CLIP-ViP result. In the readme file, it is mentioned that the data preprocessing step follows HD-VILA. However, in the [configuration files](https://github.com/microsof…
-
We ran the code on the MSVD dataset, but it exited without any errors or hints. Could the author give me some help? Thanks very much!
-
Hello, @antoyang
I'm trying to build an ActivityNet dataset for the vid2seq model. While looking at your repository ([FrozenBiLM](https://github.com/antoyang/FrozenBiLM)), I noticed that activit…
-
Hi, is there any dataset for the LSMDC fill-in-the-blank task?
-
Hi! I am trying zero-shot inference with the code below:
```shell
DATA_DIR=data
DATASET=activitynet
DATASET_FILE=ActivityNet-QA
CKPT_PATH=checkpoints/frozenbilm_activitynet.pth
TRANSFORMERS_CACH…
-
Hi, I have read your paper "FrozenBiLM". I have several questions about the preprocessing of the LSMDC-FiB dataset, since I noticed that some blanks contain only part of a word. For example "I …