Open · ApoorvFrontera opened this issue 2 months ago
Hi Team, any response or help on this would be very useful.
Thanks in advance.
I am having the same issue.
same issues here.
same issues
Hello, I'm a PhD student from ZJU. I also use VideoLLaMA2 in my own research. We have created a WeChat group to discuss VideoLLaMA2 issues and help each other; could you join us? Please contact me: WeChat number == LiangMeng19357260600, phone number == +86 19357260600, e-mail == liangmeng89@zju.edu.cn.
Hi Team,
When I load the Mixtral-based SFT MoE model 'DAMO-NLP-SG/VideoLLaMA2-8x7B' using the inference code provided in the README.md, the following error is raised:
I tried to find the reason for this and came across the following issue, where the root cause was that the weights had not been saved correctly: the checkpoint contained empty state dictionaries, which safetensors cannot handle. https://github.com/huggingface/transformers/issues/27397#issuecomment-1806063673
To solve this, some changes would need to be made on your side before saving the model checkpoint:
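For illustration, here is a minimal sketch of the kind of pre-save cleanup the linked comment points at: dropping zero-element entries from the state dict so the safetensors serializer never sees an empty tensor. `drop_empty_entries` is a hypothetical helper name, and plain Python lists stand in for tensors so the sketch stays self-contained; the real fix would operate on `torch.Tensor` values (e.g. filtering on `tensor.numel() == 0`) before calling `save_pretrained`.

```python
def drop_empty_entries(state_dict):
    """Hypothetical pre-save cleanup: remove zero-element entries.

    Keys mapping to empty containers (lists stand in for tensors here)
    are dropped, since safetensors fails on empty weight entries.
    """
    return {key: val for key, val in state_dict.items() if len(val) > 0}


# Example: an MoE-style checkpoint where one expert slot ended up empty.
checkpoint = {
    "model.experts.0.w1": [0.1, 0.2],
    "model.experts.1.w1": [],        # empty entry -> breaks safetensors
    "model.gate.weight": [0.3],
}

cleaned = drop_empty_entries(checkpoint)
print(sorted(cleaned))  # the empty expert entry is gone
```

An alternative workaround, if the checkpoint cannot be re-saved, is to avoid the safetensors path entirely by saving with `safe_serialization=False` (a real `save_pretrained` parameter), at the cost of falling back to pickle-based `.bin` shards.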