Open · Puqi7 opened 1 week ago

Thanks for your wonderful work and the clear, clean open-source code.
I ran into a problem when running 04_query_llm.sh: there is no llama-2-13b-chat under libs/llama/. May I ask how to set this up?
Thanks!

Hi, thank you! I appreciate it.
To get started, please download llama-2-13b-chat from Meta's official repository by following their download instructions. Once downloaded, place both the model and tokenizer files in the libs/llama directory; the expected layout is sketched below.
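A minimal sanity check of the assumed layout (the exact paths under libs/llama are assumptions based on this thread; the file names follow Meta's Llama-2 release format):

```python
# Verify the assumed layout under libs/llama (paths are assumptions from
# this thread; file names follow Meta's Llama-2 checkpoint release).
from pathlib import Path

root = Path("libs/llama")
ckpt_dir = root / "llama-2-13b-chat"

shards = sorted(ckpt_dir.glob("consolidated.*.pth"))  # 13B ships as two shards
print("checkpoint shards:", [p.name for p in shards])
assert (ckpt_dir / "params.json").exists(), "params.json missing"
assert (root / "tokenizer.model").exists(), "tokenizer.model missing"
```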
Let me know if this helps!

Thanks a lot, that solved it! I downloaded llama-2-13b-chat and put both files in the right place.
However, I now run into errors in 04_query_llm.sh: there seems to be a mismatch between the shapes of the parameters defined in the model and those expected by the checkpoint being loaded. May I ask how to resolve this?
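For reference, a minimal way to inspect the shard shapes (assuming Meta's standard Llama-2 checkpoint layout; the tensor key name follows Meta's reference implementation and is an assumption here):

```python
# Inspect one shard's tensor shapes on CPU. The path and key name are
# assumptions: Meta's Llama-2 release stores consolidated.*.pth shards,
# and its reference model names the query projection layers.N.attention.wq.
import torch

shard = torch.load(
    "libs/llama/llama-2-13b-chat/consolidated.00.pth", map_location="cpu"
)
# For 13B this prints only part of the full wq weight: the checkpoint is
# split across two model-parallel shards.
print(shard["layers.0.attention.wq.weight"].shape)
```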

Hi! I noticed that you are running on a single GPU, but llama-2-13b-chat needs two GPUs: its checkpoint is sharded into two model-parallel parts, one per process. If you only have access to one GPU, you should switch to llama-2-7b-chat instead. Note that you may need to tweak the prompt, as the smaller model may not work as well.
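The underlying constraint looks roughly like this (a sketch mirroring the assertion in Meta's reference loader; the path under libs/llama is an assumption from this thread):

```python
# Why 13B needs two GPUs: Meta's reference loader starts one process per
# checkpoint shard (model-parallel rank) and asserts the counts match.
from pathlib import Path

ckpt_dir = Path("libs/llama/llama-2-13b-chat")  # assumed path from this thread
shards = sorted(ckpt_dir.glob("consolidated.*.pth"))  # two shards for 13B
world_size = 1  # a single-GPU run launches a single process

# Mirrors the check in Meta's llama/generation.py:
assert world_size == len(shards), (
    f"Loading a checkpoint for MP={len(shards)} but world size is {world_size}"
)
```

With two GPUs, the scripts would typically be launched via torchrun --nproc_per_node 2 so that the world size matches the shard count; llama-2-7b-chat ships as a single shard, so one process on one GPU is enough.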