EvolvingSoftware opened 4 months ago
It's so easy to update, and I'll just leave the details here so I can find them again if I need to. I took the latest release from here: https://github.com/ggerganov/llama.cpp/releases/tag/b3358
I used the arm64 version (I'm not sure enough about the latest Mac architectures to know whether this is the better choice over the x64 version, but it seems to work well on my MacBook Air M3). Then I extracted llama-server, copied it into the FreeChat project, and renamed it freechat-server.
Gemma2 (https://huggingface.co/bartowski/gemma-2-9b-it-GGUF/blob/main/gemma-2-9b-it-Q6_K.gguf) started working immediately after swapping the server to the latest version. I'll try Deepseek Coder later (https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF).
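In case it helps the next person, here's a rough Python sketch of those same manual steps: download the release zip, pull out llama-server, and drop it in as freechat-server. The asset name, the layout inside the zip, and the destination path are my assumptions, not something from this thread, so check the actual release page and your FreeChat checkout.

```python
import io
import os
import shutil
import urllib.request
import zipfile

# Assumed asset name for the macOS arm64 build of release b3358 --
# verify against the actual release page before relying on it.
TAG = "b3358"
ASSET = f"llama-{TAG}-bin-macos-arm64.zip"
URL = f"https://github.com/ggerganov/llama.cpp/releases/download/{TAG}/{ASSET}"

# Hypothetical destination inside a FreeChat checkout; adjust to wherever
# your copy actually keeps the freechat-server binary.
DEST = "FreeChat/freechat-server"

# Download the release zip into memory and open it.
with urllib.request.urlopen(URL) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))

# The binary's path inside the archive can vary between releases, so just
# look for any entry that ends in "llama-server".
member = next(name for name in archive.namelist() if name.endswith("llama-server"))

# Copy it out under the new name and restore the executable bit
# (plain zip extraction drops it).
with archive.open(member) as src, open(DEST, "wb") as dst:
    shutil.copyfileobj(src, dst)
os.chmod(DEST, 0o755)

print(f"Replaced {DEST} with llama-server from {TAG}")
```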
Glad you figured this out, sorry I missed it! I try to do this manually about monthly, but it would be so clutch to have a bot do this.
Will leave this open until the next time I update (maybe this weekend!)
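For the bot idea, the "find the newest version" half is basically one GitHub API call. A minimal sketch, assuming the macOS arm64 asset keeps a "macos-arm64" substring in its name:

```python
import json
import urllib.request

# Ask the GitHub API for the newest llama.cpp release, then pick out the
# macOS arm64 asset so a scheduled job (or the script above) knows what to fetch.
API = "https://api.github.com/repos/ggerganov/llama.cpp/releases/latest"

with urllib.request.urlopen(API) as resp:
    release = json.load(resp)

print("latest tag:", release["tag_name"])
for asset in release["assets"]:
    # Assumption: macOS arm64 builds keep "macos-arm64" in the asset name.
    if "macos-arm64" in asset["name"]:
        print("download:", asset["browser_download_url"])
```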
Is it straightforward/easy to update the version of llama.cpp being used? Is it something we could enable the end user to do via the UI?