simonw / llm-gpt4all

Plugin for LLM adding support for the GPT4All collection of models
Apache License 2.0

Local models don't work without internet connection #10

Closed FergusFettes closed 8 months ago

FergusFettes commented 1 year ago
~> llm -m orca-mini-3b 'say "hello world"'
Hello, world!
~> # here I turned off my wifi
~> llm -m orca-mini-3b 'say "hello world"'
Error: HTTPSConnectionPool(host='gpt4all.io', port=443): Max retries exceeded with url: /models/models.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7ff321990610>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))

It would be awesome if the local models (and the MPT ones?) worked without a network connection.

simonw commented 1 year ago

This is supposed to work. Moving this to the llm-gpt4all repo.

simonw commented 1 year ago

This looks to me like a bug in this code: https://github.com/simonw/llm-gpt4all/blob/0046e2bf5d0a9c369b804d7125a1ab50bd5878f1/llm_gpt4all.py#L160-L179

simonw commented 1 year ago

Could you make sure you're running the latest version of the plugin and try this again?

llm install -U llm-gpt4all

This bug should have been fixed here: https://github.com/simonw/llm-gpt4all/commit/32a50005da0171fcf68652f8446405d8c0a61868

FergusFettes commented 1 year ago

I ran that command but the result is the same.

$ llm plugins
[
  {
    "name": "llm.default_plugins.openai_models",
    "hooks": [
      "register_commands",
      "register_models"
    ]
  },
  {
    "name": "llm-replicate",
    "hooks": [
      "register_commands",
      "register_models"
    ],
    "version": "0.2"
  },
  {
    "name": "llm-gpt4all",
    "hooks": [
      "register_models"
    ],
    "version": "0.1.1"
  }
]
$ llm version
llm, version 0.8
rotterb commented 1 year ago

First of all, thanks for the great package, @simonw!

I ran into the same problem today. It seems that allow_download is set to True by default in the GPT4All class from the gpt4all package:

https://github.com/nomic-ai/gpt4all/blob/b6e38d69eda9920f4fddb438093e02f88aa3cf60/gpt4all-bindings/python/gpt4all/gpt4all.py#L57-L70

So when retrieve_model is called, list_models is called too, which triggers an HTTP request to gpt4all.io:

https://github.com/nomic-ai/gpt4all/blob/b6e38d69eda9920f4fddb438093e02f88aa3cf60/gpt4all-bindings/python/gpt4all/gpt4all.py#L118-L145

A workaround could be to pass allow_download=False as a parameter in line 111 when the model has already been downloaded:

https://github.com/simonw/llm-gpt4all/blob/0046e2bf5d0a9c369b804d7125a1ab50bd5878f1/llm_gpt4all.py#L104-L113
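The idea above can be sketched roughly like this. This is a hypothetical illustration, not the plugin's actual code: `should_allow_download`, `models_dir`, and `model_filename` are made-up names, and the `GPT4All(...)` call at the end assumes the gpt4all bindings' documented `allow_download` keyword argument.

```python
import pathlib

def should_allow_download(models_dir: str, model_filename: str) -> bool:
    """Only permit a network download if the model file is missing locally."""
    return not (pathlib.Path(models_dir) / model_filename).exists()

# Hypothetical usage, assuming the gpt4all bindings' constructor:
# gpt4all_model = GPT4All(
#     model_filename,
#     model_path=models_dir,
#     allow_download=should_allow_download(models_dir, model_filename),
# )
```

With allow_download=False, the bindings skip the list_models request, so an already-downloaded model can load with no internet connection.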

hydrosquall commented 10 months ago

Thanks for the investigation @rotterb , I encountered this issue on the train today, and it was nice to see you'd found the fix. I'm optimistic that this should do it:

https://github.com/simonw/llm-gpt4all/pull/18