Closed AmitRozner closed 9 months ago
This is mainly because you are not using the right vicuna-v0 weights. Please follow the correct steps to prepare them. Similar issues can be found in https://github.com/Vision-CAIR/MiniGPT-4/issues/12
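For reference, the usual vicuna-v0 preparation step merges the delta weights onto the base LLaMA weights with FastChat's `apply_delta` tool. This is only a sketch under assumptions: the paths are placeholders, and the flag names vary between FastChat versions (newer releases use `--base-model-path`/`--target-model-path`/`--delta-path`), so check the linked issue for the exact versions and steps.

```shell
# Hypothetical local paths -- adjust to your own checkout.
# Merges the vicuna-13b-v0 delta onto the base LLaMA-13B HF weights,
# producing a working vicuna-v0 checkpoint at --target.
python -m fastchat.model.apply_delta \
    --base /path/to/llama-13b-hf \
    --target /path/to/vicuna-13b-v0 \
    --delta lmsys/vicuna-13b-delta-v0
```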
Thanks, this solved the issue.
Hi, I tried to install VideoChat2 locally and run your demo via demo.py and demo.ipynb. The models load and inference runs, but I get garbage as the answer (for the demo images/videos as well). For example:
I tried both video and image, but the same happens. I am using the following models:
And in demo.py:
state_dict = torch.load("./videochat2_7b_stage3.pth", map_location="cpu")
I am using Ubuntu 20.04, Python 3.9, CUDA 11.8. Any clue why this could happen?
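One quick way to narrow down garbage output like this is to check whether the checkpoint keys actually match the model's parameter names: any parameter the checkpoint never fills stays at random init and produces nonsense. Below is a minimal, self-contained sketch; `report_unmatched_keys` is an illustrative helper (not part of VideoChat2), and the example key names are made up.

```python
def report_unmatched_keys(model_keys, ckpt_keys):
    """Return (missing, unexpected) key lists, ignoring a 'module.' prefix
    that DataParallel-style checkpoints often add."""
    strip = lambda k: k[len("module."):] if k.startswith("module.") else k
    model = {strip(k) for k in model_keys}
    ckpt = {strip(k) for k in ckpt_keys}
    missing = sorted(model - ckpt)      # params the checkpoint never fills
    unexpected = sorted(ckpt - model)   # checkpoint entries with no target
    return missing, unexpected

if __name__ == "__main__":
    # In the real demo you would call, after loading the checkpoint:
    #   state_dict = torch.load("./videochat2_7b_stage3.pth", map_location="cpu")
    #   missing, unexpected = report_unmatched_keys(model.state_dict(), state_dict)
    # Dummy keys here just to show the output shape:
    missing, unexpected = report_unmatched_keys(
        ["module.llama.w", "vision.q"], ["llama.w", "extra.bias"]
    )
    print(missing)     # -> ['vision.q']  (left at random init: a garbage-output suspect)
    print(unexpected)  # -> ['extra.bias']
```

If `missing` is large, the checkpoint is not being applied to the model you built, which matches the wrong-vicuna-version symptom above.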