[ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversations about videos. It combines the capabilities of LLMs with a pretrained visual encoder adapted for spatiotemporal video representation. We also introduce a rigorous quantitative evaluation benchmark for video-based conversational models.
While reading the paper, I couldn't find any training dataset other than the 100K ActivityNet-based instruction pairs used for instruction tuning.
What training dataset do you use for the zero-shot ActivityNet-QA evaluation?