Closed by FergusFettes 8 months ago
This is supposed to work. Moving this to the llm-gpt4all repo.
This looks to me like a bug in this code: https://github.com/simonw/llm-gpt4all/blob/0046e2bf5d0a9c369b804d7125a1ab50bd5878f1/llm_gpt4all.py#L160-L179
Could you make sure you're running the latest version of the plugin and try this again?
llm install -U llm-gpt4all
This bug should have been fixed here: https://github.com/simonw/llm-gpt4all/commit/32a50005da0171fcf68652f8446405d8c0a61868
I ran that command but the result is the same.
$ llm plugins
[
  {
    "name": "llm.default_plugins.openai_models",
    "hooks": [
      "register_commands",
      "register_models"
    ]
  },
  {
    "name": "llm-replicate",
    "hooks": [
      "register_commands",
      "register_models"
    ],
    "version": "0.2"
  },
  {
    "name": "llm-gpt4all",
    "hooks": [
      "register_models"
    ],
    "version": "0.1.1"
  }
]
$ llm version
llm, version 0.8
First of all, thanks for the great package, @simonw!
I ran into the same problem today. It seems that allow_download is set to True by default in the Gpt4All class from the gpt4all package. So when retrieve_model is called, list_models is called too, which triggers a network request:
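The behavior described above can be sketched as follows. This is a hypothetical simplification, not the actual gpt4all source: the real retrieve_model and list_models have more parameters and logic, but the essential point is that the remote model list is consulted whenever allow_download is True, even if the model file is already on disk.

```python
# Hypothetical simplification of the behavior described above,
# not the actual gpt4all source code.

def list_models():
    # Stands in for the HTTP fetch of the remote model index;
    # offline, this is where the failure surfaces.
    raise ConnectionError("tried to fetch the model list over the network")

def retrieve_model(model_name, allow_download=True):
    if allow_download:
        # Network hit happens even when the model file is local.
        list_models()
    return {"filename": model_name}
```

With the default allow_download=True, calling retrieve_model offline fails; passing allow_download=False returns without any request being attempted.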
A workaround could be to pass allow_download=False as a parameter on line 111 if the model has already been downloaded.
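A minimal sketch of that workaround: only allow downloading (and the model-list request it entails) when the model file is not already present on disk. The helper name and paths here are illustrative, not the plugin's actual code.

```python
from pathlib import Path

def should_allow_download(model_path: str) -> bool:
    # Hypothetical helper: permit the gpt4all downloader (and its
    # network request) only when the model file is missing locally.
    return not Path(model_path).is_file()
```

The returned flag would then be passed as allow_download when constructing the model, so an already-downloaded model never touches the network.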
Thanks for the investigation @rotterb, I encountered this issue on the train today, and it was nice to see you'd found the fix. I'm optimistic that this should do it:
It would be awesome if the offline models (and the mpt ones?) worked without a network connection.