saul-jb opened this issue 1 year ago (status: Open)
Thanks for this!
I looked it up and tried to build the new gpt4all-backend.
From a quick look, it seems to support only dynamic linking, unlike this project. There is a good reason for dynamic linking: one does not have to ship separate builds for AVX and AVX2, and it can support multiple llama.cpp versions at the same time. But it also means a binary compiled on one machine cannot simply be trusted to work on another.
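To make the dispatch idea concrete: a dynamically linked backend can inspect the CPU at startup and load the matching implementation, instead of shipping one statically linked binary per instruction set. A minimal sketch (the library names here are hypothetical, and reading `/proc/cpuinfo` is Linux-specific):

```python
from pathlib import Path

def cpu_flags() -> set[str]:
    """Read CPU feature flags from /proc/cpuinfo (Linux-specific)."""
    text = Path("/proc/cpuinfo").read_text()
    for line in text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def pick_backend(flags: set[str]) -> str:
    """Choose which shared library to load at runtime.

    The library names are made up for illustration; the point is that
    the choice happens on the user's machine, not at build time.
    """
    if "avx2" in flags:
        return "libllama-avx2.so"
    if "avx" in flags:
        return "libllama-avx.so"
    return "libllama-generic.so"
```

A statically linked project like this one has to bake that choice in at compile time, which is why a build made on an AVX2 machine may crash on older hardware.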
With this project, one is unfortunately stuck with the old format for now.
It would be good to leave this issue open so that people know it does not work with the new ggml formats.
GPT4All uses a newer version of llama.cpp that can handle the new ggml formats. Currently, this project throws an error similar to the following if you attempt to load a model in a newer format:
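As a rough illustration of why an older loader rejects newer files: the ggml-family formats can be told apart by the 4-byte magic at the start of the file, and a loader built against the old format errors out when it sees a magic it does not know. A minimal sketch (the magic values are assumed from llama.cpp's loader and may differ across versions):

```python
import struct

# ggml-family magic numbers, read as a little-endian uint32 from the
# first four bytes of the model file. These values are an assumption
# based on llama.cpp's loader; verify against the version you build.
KNOWN_MAGICS = {
    0x67676D6C: "ggml (old, unversioned format)",
    0x67676D66: "ggmf (newer, versioned format)",
    0x67676A74: "ggjt (newer, versioned, mmap-friendly format)",
}

def identify_model_format(path: str) -> str:
    """Return a description of which ggml variant a model file uses."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return KNOWN_MAGICS.get(magic, f"unknown magic 0x{magic:08x}")
```

An old loader effectively performs the same check but only accepts the first entry, so any model saved in one of the newer formats fails at this step before any weights are read.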