-
Hello, I want to reproduce your code results, but many imported packages are missing from the code header, such as `youtube_dataloader`, `youcook_dataloader`, `msrvtt_dataloader`, `lsmdc_dataloader`, `model_kmean`…
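The names above look like local modules in the repo root rather than pip packages, so a quick sanity check is to see which of them are importable from the repository directory. A minimal sketch (the module names are taken from the issue text; whether they exist as files is an assumption about the repo layout):

```python
import importlib.util

# Module names quoted in the issue; assumed to be .py files in the repo root,
# not installable packages.
REQUIRED = [
    "youtube_dataloader",
    "youcook_dataloader",
    "msrvtt_dataloader",
    "lsmdc_dataloader",
    "model_kmean",
]

# find_spec returns None for a top-level name that cannot be located.
missing = [m for m in REQUIRED if importlib.util.find_spec(m) is None]
print("Missing local modules:", ", ".join(missing) if missing else "none")
```

Running this from the repository root tells you whether the files are genuinely absent from the release or just not on `sys.path`.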
-
The youcook2 data repository (http://youcook2.eecs.umich.edu/download) only provides a script to download the raw videos into a folder `.../youcook2/raw_videos/`. However, the entries in the `youcook_…
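Since the download script only drops raw files under `raw_videos/` (typically in per-recipe subfolders), one workaround is to build an id-to-path index by walking that folder and matching annotation entries against file stems. A minimal sketch, assuming the YouTube id is the filename stem (the folder layout and extensions are assumptions, not something the dataset docs guarantee):

```python
import os

VIDEO_EXTS = {".mp4", ".mkv", ".webm"}

def index_raw_videos(root):
    """Walk raw_videos/ and map each video id (assumed to be the
    file stem) to its absolute path, whatever subfolder it sits in."""
    mapping = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            stem, ext = os.path.splitext(name)
            if ext.lower() in VIDEO_EXTS:
                mapping[stem] = os.path.join(dirpath, name)
    return mapping
```

With this index, an annotation entry's video id can be resolved to an actual file regardless of which recipe subfolder the downloader placed it in.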
-
Great work! I was hoping to quickly obtain the raw video dataset used and try to train a VideoChat2. How could I get a filtered raw video dataset rather than downloading all the video datasets that the d…
-
Hi, thank you for sharing the model.
For evaluation, the command line suggests using 6144 for the embedding dimension:
python eval.py --eval_msrvtt=1 --eval_youcook=1 --eval_lsmdc=1 --num_thread_reader=8 --embd_di…
-
> Most of the videos are common and relatively easy to obtain. For some datasets that are more difficult to access, for EgoQA videos, download them from this [link](https://pjlab-gvm-dat…
-
Hi, as mentioned in the documentation on [vid2seq](https://github.com/google-research/scenic/tree/main/scenic/projects/vid2seq)
> Note that because this project relies on Scenic train_lib_deprecate…
-
Hi, thanks for your great work on VideoChat2!
I tried to organize the Ego4D dataset used in the paper, but I found that there are several splits for each video, and the split information is unavail…
-
In the new v1.5 version of https://github.com/Efficient-Large-Model/VILA/blob/main/data_prepare/README.md
there are links to new dataset annotation files such as
`huggingface-cli download mit-han-lab/vi…
-
bash eval.sh
The launch script is as follows:
```bash
#!/bin/bash
DIR="VTG-LLM"
MODEL_DIR="/home1/lw/fyy/VTG-LLM/vtgllm.pth"
# TASK='dvc'
# ANNO_DIR='data/VTG-IT/dense_video_caption/Youcook2'
# VIDEO_DIR='data/youco…
-
**Describe the bug**
Audio-Webui does not install the requirements properly; specifically, it reports that audiolm failed to install.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'audio-…