-
I was trying to set up `vid2seq` for evaluation, but on running the command:
```
python -m scenic.projects.vid2seq.main \
--config=scenic/projects/vid2seq/configs/youcook2.py \
--work…
```
-
hi @antoyang ,
Thanks for the great work.
I am trying to run the code for videos without any transcripts, and am interested in reproducing the result of `Row#3` in `Table 2`, i.e. Pretraining with vi…
-
Hi,
I am trying to reproduce [MMPT (videoclip)](https://github.com/facebookresearch/fairseq/tree/main/examples/MMPT). However, I can't find how to download 'data/youcook/youcook_val.pkl'. Can you p…
-
Hi, thanks for the great source code.
I am trying to train with the YouCook2 dataset and am facing an error related to GPU resources.
I am using a single 3080Ti GPU with 10 GiB, and modified the training command…
-
Very good job; it benefited me a lot.
But when I downloaded the QVHighlights dataset, the speed was very slow, about 20 KB/s.
How can I obtain this dataset more easily?
Can you upload QVHighlights to Google…
-
Dear authors, I have evaluated your checkpoint on Youcook2. However, the results are much lower than what you reported. Is this correct?
### Checkpoint Videoclip
**Clip-caption**
30fps R@1: 0.2…
-
@antoyang @a-nagrani Dear authors, thanks for the great work. Could you please provide the transcribed ASR data of the YouCook2 and ActivityNet Captions datasets you used in the experiments so that…
-
Hi, thank you for sharing the code and models.
I have used the ckpt_violet_pretrain.pt and ckpt_violet_msrvtt-retrieval checkpoints with our data processing (5 frames with interval num_frames // 5) for msrvtt …
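The sampling scheme described above (5 frames taken at a stride of num_frames // 5) can be sketched as follows. This is a minimal illustration of that arithmetic only; the function name and signature are hypothetical and not taken from the VIOLET codebase, which may handle rounding or frame alignment differently.

```python
def sample_frame_indices(num_frames: int, n_samples: int = 5) -> list[int]:
    """Pick n_samples frame indices from a video of num_frames frames,
    spaced at an interval of num_frames // n_samples (as described above)."""
    interval = num_frames // n_samples
    return [i * interval for i in range(n_samples)]

# For a 100-frame clip, this yields indices 0, 20, 40, 60, 80.
print(sample_frame_indices(100))
```

Note that with integer-division strides the last sampled index falls short of the final frame, which is one possible source of a mismatch against the original preprocessing.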
-
Hi, thanks for the great sources.
I successfully fine-tuned the video captioning model on YouCook2, and am now trying to feed in a video file as input and get its caption.
But when I look at the code, it …
-
Is it possible to provide the YouCook2 features? Alternatively, could you provide the script/code to extract features from the YouCook2 dataset, or the configuration of C3D/Clip_b16?
Thank you so much for the …