getumbrel / llama-gpt

A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device. New: Code Llama support!
https://apps.umbrel.com/app/llama-gpt
MIT License

Update llama_cpp_python to fix issues on Mac #131

Open DaramG opened 1 year ago

DaramG commented 1 year ago

Running run-mac.sh on Mac was causing an internal server error. To fix this, I updated llama_cpp_python to the latest version. This resolves #73 and #95.
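
For anyone verifying the fix locally, a minimal smoke test along these lines can confirm that the updated llama_cpp_python can actually load a model. This is just a sketch, not part of the PR; the model path and parameters are assumptions you should adjust to your own setup:

```python
# Hypothetical smoke test (not from this PR): after upgrading llama-cpp-python,
# load a GGUF model directly to confirm the loader no longer returns a 500.
from llama_cpp import Llama

MODEL_PATH = "./models/code-llama-7b-chat.gguf"  # assumed path; point at your own model

llm = Llama(model_path=MODEL_PATH, n_ctx=2048)   # raises an exception if the model cannot be loaded
out = llm("Q: What is 2 + 2? A:", max_tokens=8)
print(out["choices"][0]["text"])
```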

henriquezago commented 11 months ago

It didn't solve my issue (#95).

adevart commented 7 months ago

This worked for me, thanks. I was having the same issue as #95. I updated the version number and restarted the server, and it loaded the model fine.

I get a similar error when loading the 7b chat model, but that's because it's in .bin format instead of .gguf like code-7b. It produces the following error, which shows up as the 500 loading error in the UI:

gguf_init_from_file: invalid magic characters tjgg
error loading model: llama_model_loader: failed to load model from ./models/llama-2-7b-chat.bin
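
As a side note, the "invalid magic characters tjgg" message just means the file isn't GGUF: GGUF files begin with the 4-byte magic "GGUF", while the older GGML/GGJT .bin format stores the magic 0x67676a74 little-endian, which reads back as the bytes "tjgg". A quick, purely illustrative helper (not part of llama-gpt) to check a model file before pointing the server at it:

```python
# Hypothetical helper: inspect the first four bytes of a model file to tell
# GGUF apart from the older GGML/GGJT .bin format that newer llama.cpp
# builds refuse to load.
import sys

def model_format(path: str) -> str:
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic == b"GGUF":
        return "GGUF (supported)"
    if magic == b"tjgg":          # GGJT magic 0x67676a74 stored little-endian
        return "legacy GGML/GGJT .bin (needs conversion to GGUF)"
    return f"unknown magic {magic!r}"

if __name__ == "__main__":
    # e.g. python check_model.py ./models/llama-2-7b-chat.bin
    print(model_format(sys.argv[1]))
```

Legacy .bin models can be converted with the GGML-to-GGUF conversion script in the llama.cpp repo, or you can simply re-download the model in GGUF format.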

cairongquan commented 1 month ago

I'm on a base M1 Pro; which version of llama_cpp_python should I install?