Open malv-c opened 1 year ago
Hi.
I fixed that by downloading the model from here:
https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/tree/main
In my case I placed the file llama-2-7b-chat.ggmlv3.q8_0.bin into the "models" folder.
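In case it helps, the manual step above can also be scripted. This is a minimal sketch (not from the repo) that builds the Hub's direct-download URL (`resolve/main` is the usual pattern for fetching a single file) and saves it into a local `models` folder, skipping the download if the file is already there:

```python
from pathlib import Path
from urllib.request import urlretrieve

def download_model(repo: str, filename: str, models_dir: str = "models") -> Path:
    """Fetch one file from a Hugging Face repo into models_dir,
    unless it is already present locally."""
    url = f"https://huggingface.co/{repo}/resolve/main/{filename}"
    dest = Path(models_dir) / filename
    dest.parent.mkdir(parents=True, exist_ok=True)
    if not dest.exists():
        urlretrieve(url, dest)  # large file (~7 GB for the q8_0 quantization)
    return dest

# Example for the file mentioned above:
# download_model("TheBloke/Llama-2-7B-Chat-GGML", "llama-2-7b-chat.ggmlv3.q8_0.bin")
```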
ok thanks
  File "/home/void/.local/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 261, in hf_raise_for_status
    response.raise_for_status()
  File "/home/void/.local/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/models/models/llama-2-7b-chat.ggmlv3.q8_0.bin/revision/main
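For context on that traceback: the local path string appears to be passed to `huggingface_hub` as if it were a Hub repo id (note the doubled `models/models/...` in the failing URL), so the Hub API lookup returns 401. A hedged sketch of a guard that fails fast with a clearer message when the file is missing locally (the function name is my own, not from this repo):

```python
from pathlib import Path

def require_local_model(path: str) -> Path:
    """Raise early with a clear message instead of letting a local
    file path leak into a Hub API call (which 401s as shown above)."""
    p = Path(path)
    if not p.is_file():
        raise FileNotFoundError(
            f"{p} not found; download the GGML file into the models folder first."
        )
    return p
```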