kennethleungty / Llama-2-Open-Source-LLM-CPU-Inference

Running Llama 2 and other Open-Source LLMs on CPU Inference Locally for Document Q&A
https://towardsdatascience.com/running-llama-2-on-cpu-inference-for-document-q-a-3d636037a3d8
MIT License

error model config #15

Open malv-c opened 1 year ago

malv-c commented 1 year ago

File "/home/void/.local/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 261, in hf_raise_for_status response.raise_for_status() File "/home/void/.local/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/models/models/llama-2-7b-chat.ggmlv3.q8_0.bin/revision/main

alexfilothodoros commented 1 year ago

Hi.

I fixed that by downloading the model from here.

https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/tree/main

In my case I placed the file llama-2-7b-chat.ggmlv3.q8_0.bin into the "models" folder.
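If you prefer to script the download, here is a minimal sketch using `huggingface_hub` (assuming the project expects the GGML weights at `models/llama-2-7b-chat.ggmlv3.q8_0.bin`, and that you have a `huggingface_hub` version recent enough to support `local_dir`):

```python
# Minimal sketch: fetch the quantized GGML file from TheBloke's repo
# and place it in the local "models" folder. Assumes `pip install huggingface_hub`.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-Chat-GGML",
    filename="llama-2-7b-chat.ggmlv3.q8_0.bin",
    local_dir="models",  # where this project looks for the model file
)
```

Once the file sits in `models/`, the model is loaded from disk, so no Hub API request (and therefore no 401) should be involved.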

malv-c commented 1 year ago

ok thanks
