Closed kilkujadek closed 1 year ago
navigate to your llama.cpp folder and type:
git pull
Nope, still the same:
(lama2) kilku@debian:~/vicuna/llama.cpp$ git pull
Already up to date.
(lama2) kilku@debian:~/vicuna/llama.cpp$ ./main -m models/ggml-vic13b-uncensored-q5_1.bin -f 'prompts/chat-with-vicuna-v1.txt' -r 'User:' --temp 0.36
main: build = 523 (0737a47)
main: seed = 1684157106
llama.cpp: loading model from models/ggml-vic13b-uncensored-q5_1.bin
error loading model: unknown (magic, version) combination: 67676a74, 00000002; is this really a GGML file?
llama_init_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'models/ggml-vic13b-uncensored-q5_1.bin'
main: error: unable to load model
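The error above comes from llama.cpp's file-header check: the loader reads two little-endian uint32 values (magic, then version) from the start of the model file and rejects combinations it doesn't know. Here 67676a74 is the ASCII string "ggjt" and 00000002 is format version 2, i.e. the file is in a newer ggjt format than the built binary understands. As a minimal sketch (the magic constants are an assumption, taken from llama.h around this era; `inspect_ggml_header` is a hypothetical helper, not part of llama.cpp), you can inspect a model file's header yourself:

```python
import struct

# Assumed magic constants from llama.cpp's llama.h around build 523.
GGML_MAGICS = {
    0x67676D6C: "ggml (unversioned)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (versioned, mmap-able)",
}

def inspect_ggml_header(path: str):
    """Return (magic, version, name) from the first 8 bytes of a model file.

    Note: for the unversioned 'ggml' magic the second uint32 is not a
    version field but the start of the hyperparameters, so only read it
    as a version for the versioned formats.
    """
    with open(path, "rb") as f:
        magic, version = struct.unpack("<II", f.read(8))
    return magic, version, GGML_MAGICS.get(magic, "unknown magic")
```

Running this on the failing file would report magic 0x67676a74 ("ggjt") with version 2 — a file format the old binary predates, which is why pulling the latest code and rebuilding (rather than changing the model) resolves it.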
Try running make -j again:
make -j
I did that as well.
I have the same error; OS: Linux Mint.
Same error here. I just freshly cloned the repo, built it fresh, downloaded the 7B model fresh, and still see the same error.
Same error with the 13b model.
OK, I just updated my fork of llama.cpp; it should work now! Navigate to the llama.cpp folder and type:
git pull
then:
make -j
Should work!
It is working now, thanks!
Hello,
Using the One-line install seems to be successful (except for a few warnings):
But when I try to run it, it throws an error: