I am trying to convert a Llama 2 model from .gguf to .bin using the export.py script from llama2.c.
~/llm_inferences/llama.cpp/models/meta$ ls
llama-2-7b.Q4_K_M.gguf
python3 export.py llama2_7b.bin --meta-llama /home/####/llm_inferences/llama.cpp/models
Traceback (most recent call last):
  File "/home/aadithya.bhat/llm_inferences/llama2.c/export.py", line 559, in <module>
    model = load_meta_model(args.meta_llama)
  File "/home/aadithya.bhat/llm_inferences/llama2.c/export.py", line 373, in load_meta_model
    with open(params_path) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/aadithya.bhat/llm_inferences/llama.cpp/models/params.json'
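From the traceback, it looks like the --meta-llama option expects a directory containing the original Meta checkpoint, i.e. params.json plus the consolidated.*.pth weight files; a standalone .gguf file provides neither, which would explain the FileNotFoundError. Here is a minimal sketch of the layout and invocation I believe export.py expects (the paths are illustrative, assuming the original Meta weights were downloaded into ~/llama/llama-2-7b):

~/llama/llama-2-7b$ ls
checklist.chk  consolidated.00.pth  params.json
~/llm_inferences/llama2.c$ python3 export.py llama2_7b.bin --meta-llama ~/llama/llama-2-7b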
For reference, I downloaded this model from https://huggingface.co/TheBloke/Llama-2-7B-GGUF; it is the file whose name ends with Q4_K_M.gguf.
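My understanding is that .gguf is llama.cpp's own (often quantized) container format, which export.py cannot read, so the conversion would need to start from unquantized weights instead. If your copy of export.py has the --hf option (recent llama2.c revisions seem to include one for HuggingFace-format models), something along these lines might work; the path below is just a placeholder for a local download of the float16 weights (e.g. meta-llama/Llama-2-7b-hf):

python3 export.py llama2_7b.bin --hf /path/to/Llama-2-7b-hf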