DAMO-NLP-SG / Video-LLaMA

[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
BSD 3-Clause "New" or "Revised" License
2.7k stars 243 forks

How To: Use Hugging Face checkpoints downloaded on a CentOS machine #148

Closed joysl closed 5 months ago

joysl commented 5 months ago

Hi Team, I'm trying to locally reference a LLaMA model that has been shared on the Hugging Face repository. However, I get the error `could not parse ModelProto from` when the code references this file. I'm trying this workflow out for the first time, so maybe I'm referencing the wrong .model file. Does anyone have tips on how to use model checkpoints, or can point me to resources/workshops that do something similar? Thanks in advance.

joysl commented 5 months ago

It's fixed now. I had cloned the Git LFS pointer instead of the actual model file.

jopaky commented 4 months ago

> cloned the Git LFS pointer instead of the actual model file

Dear author, could you explain what you mean by cloning the Git LFS pointer, and the command you used to get the actual model file? Thanks in advance.

joysl commented 4 months ago

https://stackoverflow.com/questions/74981712/clone-git-repo-with-all-lfs-objects
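For anyone hitting the same problem, a minimal sketch of what that answer boils down to (the Hugging Face repo URL below is only an example; gated repos such as the official Llama 2 checkpoints also require you to be logged in with an access token):

```bash
# Make sure Git LFS is installed and initialized; otherwise `git clone`
# only fetches small text pointer files in place of the real weights.
git lfs install

# Example: clone a Hugging Face checkpoint repo together with its LFS objects
# (replace the URL with the model repo you actually need).
git clone https://huggingface.co/meta-llama/Llama-2-13b-chat-hf

# If the repo was already cloned without the LFS objects,
# fetch and check out the real files from inside the checkout:
git lfs pull
```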

jopaky commented 4 months ago

> https://stackoverflow.com/questions/74981712/clone-git-repo-with-all-lfs-objects

Thanks for your reply, but I still get the following error: `RuntimeError: Internal: could not parse ModelProto from /home/kunpeng/Video-LLaMA/ckpt/Llama-2-13b-chat-hf/tokenizer.model`

My video_llama_eval_withaudio.yaml file was set as: llama_model: '/home/kunpeng/Video-LLaMA/ckpt/Llama-2-13b-chat-hf'

Could you do me a favor and check if it is correct?

Thanks
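That `could not parse ModelProto` error comes from SentencePiece and usually means tokenizer.model is still a Git LFS pointer stub rather than the actual binary tokenizer. A quick way to check, as a minimal sketch assuming the path from the error message above:

```bash
# A pointer stub is a tiny text file (~130 bytes); the real SentencePiece
# tokenizer for Llama 2 is a binary file of roughly 500 KB.
ls -lh /home/kunpeng/Video-LLaMA/ckpt/Llama-2-13b-chat-hf/tokenizer.model
head -c 200 /home/kunpeng/Video-LLaMA/ckpt/Llama-2-13b-chat-hf/tokenizer.model

# If the output starts with "version https://git-lfs.github.com/spec/v1",
# fetch the real files from inside the checkout:
cd /home/kunpeng/Video-LLaMA/ckpt/Llama-2-13b-chat-hf && git lfs pull
```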