Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.
Apache License 2.0
invalid magic number 67676a74, error loading model (#464)

llama_model_loader: failed to load model from /root/.cache/huggingface/hub/models--TheBloke--Llama-2-7B-Chat-GGML/snapshots/00109c56c85ca9015795ca06c272cbc65c7f1dbf/llama-2-7b-chat.ggmlv3.q4_0.bin
llama_load_model_from_file: failed to load model
I get the following error when running:

docker run -it --mount src="$HOME/.cache",target=/root/.cache,type=bind --gpus=all localgpt
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCpp
__root__
Could not load Llama model from path: /root/.cache/huggingface/hub/models--TheBloke--Llama-2-7B-Chat-GGML/snapshots/00109c56c85ca9015795ca06c272cbc65c7f1dbf/llama-2-7b-chat.ggmlv3.q4_0.bin. Received error (type=value_error)
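The reported magic value 0x67676a74 spells "ggjt" in ASCII, the tag used by the old GGJT/GGML container format; llama.cpp builds that only accept the newer GGUF format will reject such files with exactly this kind of error. A minimal sketch to confirm what format a model file actually is (the path is the one from the traceback; the GGML-vs-GGUF mismatch is my inference, not confirmed by the maintainers):

```python
import struct

def read_magic(model_path: str) -> int:
    """Read the first 4 bytes of a model file as a little-endian uint32,
    matching how llama.cpp interprets and prints the magic number."""
    with open(model_path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return magic

# The value from the error corresponds to the on-disk bytes b"tjgg",
# i.e. the ASCII tag "ggjt" read as a little-endian integer:
GGJT_MAGIC = 0x67676A74

# Example (path from the traceback above):
# magic = read_magic("/root/.cache/huggingface/hub/models--TheBloke--Llama-2-7B-Chat-GGML/"
#                    "snapshots/00109c56c85ca9015795ca06c272cbc65c7f1dbf/"
#                    "llama-2-7b-chat.ggmlv3.q4_0.bin")
# if magic == GGJT_MAGIC:
#     print("GGJT/GGML v3 file: your llama.cpp build likely expects GGUF instead")
```

If the magic matches, the file itself is fine and the mismatch is on the loader side: either pin a llama-cpp-python version that still supports GGML v3, or download a GGUF variant of the model.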