The version of llama-cpp-python this project uses is quite old, so I get a lot of errors about GGML model versions, and it doesn't support GGUF models at all.
I would suggest bumping llama-cpp-python to the latest version.
GGUF models seem to be the future anyway.
Perhaps we could work on something that allows choosing between formats?
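For the "allow choosing" part, here is a rough sketch of what that could look like, assuming the project loads models through `llama_cpp.Llama`; the helper name and error messages are just illustrative, not anything that exists in the codebase today:

```python
from pathlib import Path

from llama_cpp import Llama  # recent releases read GGUF, older ones read GGML


def load_model(model_path: str, **kwargs) -> Llama:
    """Load a local model, checking the file format before handing it to llama.cpp.

    Failing early with a clear message is friendlier than the low-level
    loader error you get when the library and the file format don't match.
    """
    suffix = Path(model_path).suffix.lower()
    if suffix == ".gguf":
        # GGUF: supported by current llama-cpp-python versions.
        return Llama(model_path=model_path, **kwargs)
    if suffix in {".ggml", ".bin"}:
        # Likely an old GGML file; newer llama-cpp-python can no longer load these.
        raise ValueError(
            f"{model_path} looks like a GGML model; convert it to GGUF "
            "(llama.cpp ships a conversion script) or pin an older "
            "llama-cpp-python release."
        )
    raise ValueError(f"Unrecognized model format: {model_path}")
```

That way users with older GGML files at least get a pointer to convert them instead of a cryptic version error.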