nomic-ai / gpt4all

GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
https://nomic.ai/gpt4all
MIT License
70.84k stars 7.71k forks

ValueError: Request failed: HTTP 403 Forbidden #3179

Closed switchedfabric2 closed 2 weeks ago

switchedfabric2 commented 2 weeks ago

Hi,

I'm getting a 403 when running the simplest of samples, e.g.:

```python
from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # downloads / loads a 4.66GB LLM
with model.chat_session():
    print(model.generate("How can I run LLMs efficiently on my laptop?", max_tokens=1024))
```

```
$ python3 p.py
Traceback (most recent call last):
  File "/foo/p.py", line 2, in <module>
    model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # downloads / loads a 4.66GB LLM
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/foo/lib/python3.12/site-packages/gpt4all/gpt4all.py", line 235, in __init__
    self.config: ConfigType = self.retrieve_model(model_name, model_path=model_path, allow_download=allow_download, verbose=verbose)
  File "/foo/lib/python3.12/site-packages/gpt4all/gpt4all.py", line 310, in retrieve_model
    available_models = cls.list_models()
  File "/foo/lib/python3.12/site-packages/gpt4all/gpt4all.py", line 280, in list_models
    raise ValueError(f'Request failed: HTTP {resp.status_code} {resp.reason}')
ValueError: Request failed: HTTP 403 Forbidden
$
```

`wget -d https://gpt4all.io/models/models3.json` works fine.

switchedfabric2 commented 2 weeks ago

Solved, see https://stackoverflow.com/questions/51268405/curl-and-python-requests-get-reporting-different-http-status-code
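The gist of the linked answer is that `curl`/`wget` and Python's `requests` send different request headers, most notably `User-Agent`, and some servers or CDNs answer 403 to the default `python-requests/<version>` agent while letting `wget` through. A minimal sketch of checking the default agent and overriding it per request (the `Mozilla/5.0` string is an arbitrary stand-in, not a recommendation, and whether this unblocks gpt4all.io is an assumption based on that answer):

```python
import requests

# The agent string requests sends by default, e.g. "python-requests/2.32.3".
# Servers that block it will often still serve curl/wget, which explains the
# differing status codes for the same URL.
print(requests.utils.default_user_agent())

url = "https://gpt4all.io/models/models3.json"
headers = {"User-Agent": "Mozilla/5.0"}  # mimic a browser-style agent

# Uncomment to test against the live endpoint:
# resp = requests.get(url, headers=headers, timeout=30)
# print(resp.status_code)
```

Note this only diagnoses the 403; the gpt4all library builds its own request internally, so a real fix would need the blocking rule relaxed server-side or the library's headers changed.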