OpenGVLab / Ask-Anything

[CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS.
https://vchat.opengvlab.com/
MIT License

Multiple Video-Text pair Support #78

Closed: mustafaadogan closed this issue 5 months ago

mustafaadogan commented 7 months ago

Hello!

First of all, I'd like to congratulate you on your great work. I have a question: I'm looking to evaluate the model's performance in a different setting, using in-context examples. Specifically, I'd like to feed the model multiple in-context video-text examples. Is that possible?
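
To make the question concrete, here is a minimal sketch of how such a few-shot prompt could be assembled. It assumes a hypothetical `embed_video` callable that turns a clip into whatever visual tokens or placeholder the model expects, and that the model accepts an interleaved list of segments; the actual VideoChat interface may differ.

```python
from typing import Callable, List, Tuple


def build_incontext_prompt(
    examples: List[Tuple[str, str, str]],   # (video_path, question, answer) demonstrations
    query_video: str,
    query_question: str,
    embed_video: Callable[[str], str],      # hypothetical: clip -> visual tokens/placeholder
) -> List[str]:
    """Interleave several video-text demonstrations before the query video."""
    segments: List[str] = []
    for video, question, answer in examples:
        segments.append(embed_video(video))
        segments.append(f"Question: {question}\nAnswer: {answer}\n")
    # The query clip comes last, with the answer left open for the model.
    segments.append(embed_video(query_video))
    segments.append(f"Question: {query_question}\nAnswer:")
    return segments
```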

Andy1621 commented 6 months ago

Yes. I have tried in-context inference, but I found that the model struggles to follow the instructions. I suspect this may be due to the lack of similar tuning data. If you are interested, you could follow MIMIC to design such data.
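
For reference, a rough sketch of what a multi-pair in-context tuning sample could look like is below. The field names are purely illustrative assumptions; they do not reflect MIMIC's actual schema or this repo's data format.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class VideoTextPair:
    video_path: str   # path to the example clip
    question: str     # instruction shown with the clip
    answer: str       # expected response


@dataclass
class InContextSample:
    # In-context demonstrations that precede the query.
    context: List[VideoTextPair] = field(default_factory=list)
    # The pair the model must actually answer (answer left empty at inference).
    query: Optional[VideoTextPair] = None


# Hypothetical example: two demonstrations followed by a query clip.
sample = InContextSample(
    context=[
        VideoTextPair("demo_cooking.mp4", "What is the person doing?",
                      "They are chopping vegetables."),
        VideoTextPair("demo_skiing.mp4", "What is the person doing?",
                      "They are skiing down a slope."),
    ],
    query=VideoTextPair("query_clip.mp4", "What is the person doing?", ""),
)
```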

yinanhe commented 5 months ago

This issue has been temporarily closed due to a long period of inactivity. If you still have any problems, please feel free to reopen it.