lucazanella / lavad

Official implementation of "Harnessing Large Language Models for Training-free Video Anomaly Detection", CVPR 2024
https://lucazanella.github.io/lavad/

Step4: 04_query_llm.sh AssertionError: no checkpoint files found in libs/llama/llama-2-13b-chat/ #16

Status: Open · Puqi7 opened this issue 1 week ago

Puqi7 commented 1 week ago

Thanks for your wonderful work and clear and clean open-source code.

I ran into a problem when running 04_query_llm.sh: libs/llama/ does not contain llama-2-13b-chat. May I ask how you set this up?

Thanks!

lucazanella commented 1 week ago

Hi, thank you! I appreciate it.

To get started, please download llama-2-13b-chat from the official repository by following these download instructions. Once downloaded, place both the model and tokenizer files in the libs/llama directory.
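If it helps, here is a minimal pre-flight check (a sketch, not part of lavad; the helper name and example paths are mine) that mirrors the assertion raised by Meta's llama loader, so you can verify the layout before launching the script:

```python
from pathlib import Path


def check_llama_dir(ckpt_dir: str, tokenizer_path: str) -> None:
    """Verify the expected file layout before running 04_query_llm.sh.

    Mirrors the check in Meta's llama loader, which fails with
    "no checkpoint files found in ..." when the directory contains
    no *.pth checkpoint shards.
    """
    ckpts = sorted(Path(ckpt_dir).glob("*.pth"))
    assert len(ckpts) > 0, f"no checkpoint files found in {ckpt_dir}"
    assert Path(tokenizer_path).is_file(), f"tokenizer not found at {tokenizer_path}"


# Example paths (adjust to your checkout):
# check_llama_dir("libs/llama/llama-2-13b-chat", "libs/llama/tokenizer.model")
```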

Let me know if this helps!

Puqi7 commented 1 week ago

Thanks a lot! I have solved this.

Puqi7 commented 1 week ago

Hi,

I downloaded llama-2-13b-chat and placed both the model and tokenizer files in the expected locations.

However, I now hit the following errors in 04_query_llm.sh. There seems to be a mismatch between the shapes of the parameters defined in the model and those expected by the checkpoint being loaded. May I ask how to resolve this?

[Screenshots from 2024-11-11 showing the parameter shape-mismatch errors]

lucazanella commented 3 days ago

Hi! I noticed that you are using only one GPU, but the llama-2-13b-chat model requires two: its checkpoint is sharded into two model-parallel partitions, so loading it on a single GPU produces exactly these shape mismatches. If you have access to only one GPU, you should switch to the llama-2-7b-chat model instead. However, please note that you may need to tweak the prompt, as the smaller model may not work as well.
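For reference, the model-parallel size has to match the number of checkpoint shards. A small sketch (a hypothetical helper, not part of lavad) that derives the required GPU count from the shard files:

```python
from pathlib import Path


def required_gpus(ckpt_dir: str) -> int:
    """Model-parallel size = number of consolidated.*.pth shards.

    llama-2-13b-chat ships two shards (consolidated.00.pth and
    consolidated.01.pth), so it needs two GPUs; llama-2-7b-chat has a
    single shard and runs on one GPU.
    """
    return len(sorted(Path(ckpt_dir).glob("consolidated.*.pth")))
```

You would then launch one process per shard, e.g. `torchrun --nproc_per_node 2 ...` for the 13B model.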