What happened?
After https://github.com/ggerganov/llama.cpp/commit/231cff5f6f1c050bcb448a8ac5857533b4c05dc7 I'm getting errors with my app, so I decided to test the compiled releases, and I can't even load the model.

Tested with llama-cli from llama-b3639-bin-win-avx2-x64.zip and the model mini-magnum-12b-v1.1.Q8_0.gguf, which worked correctly previously. Log: main.log
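For reference, an invocation along these lines is enough to hit the load failure; the prompt and options here are illustrative, not copied from main.log:

```
llama-cli.exe -m mini-magnum-12b-v1.1.Q8_0.gguf -p "Hello" -n 32
```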
UPD: I've tested earlier releases, and AVX2 builds down to b3590 fail with the same error. This is weird, because I have 64 GB of RAM, which should be more than enough.

Name and Version
main: build = 3639 (20f1789d)
OS: Windows 10
What operating system are you seeing the problem on?
Windows
Relevant log output