linxid opened this issue 3 months ago
I downloaded both the QA pairs and videos from https://mbzuai-oryx.github.io/Video-ChatGPT/
Then, create the following folder structure within `DATAS`:

```
DATAS/
└── VCGBench/
    ├── Videos/
    │   └── Benchmarking/
    └── Zero_Shot_QA/
```

Then place the QA JSON files in `Zero_Shot_QA/` and the videos in `Videos/Benchmarking/`.
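For reference, the layout above can be created with a short script. This is just a sketch based on the tree shown here; the helper name `make_vcgbench_layout` is my own, and only the folder names (`Videos/Benchmarking`, `Zero_Shot_QA`) come from the structure described in this thread.

```python
from pathlib import Path

def make_vcgbench_layout(root: str) -> Path:
    """Create the DATAS/VCGBench folder skeleton described above.

    Returns the path to the VCGBench directory so callers can
    copy the QA JSON files and videos into place afterwards.
    """
    base = Path(root) / "DATAS" / "VCGBench"
    # Videos go under Videos/Benchmarking/, QA pairs under Zero_Shot_QA/
    (base / "Videos" / "Benchmarking").mkdir(parents=True, exist_ok=True)
    (base / "Zero_Shot_QA").mkdir(parents=True, exist_ok=True)
    return base

if __name__ == "__main__":
    base = make_vcgbench_layout(".")
    print(base / "Zero_Shot_QA")
```

After running it, drop the downloaded QA JSON files into `Zero_Shot_QA/` and the benchmark videos into `Videos/Benchmarking/`.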
I deduced the structure from the code at https://github.com/magic-research/PLLaVA/blob/main/tasks/eval/vcgbench/__init__.py#L272-L301
I've uploaded the evaluation data here: https://huggingface.co/datasets/ermu2001/PLLaVATesting/tree/main/DATAS
You can follow the instructions there on the dev branch to prepare this data directly. Also, I recommend switching to dev, as we fixed some bugs there.
thanks
Hello! I would also like to test the Video-ChatGPT benchmark, but it needs GPT assistance. Could you tell me approximately how much it costs to run the Video-ChatGPT benchmark once?
This is amazing work. I am trying to evaluate model performance. In the Video-ChatGPT dataset, I can only find these data. How can I get the test_q.json and test_a.json files?