simonw / llm-gpt4all

Plugin for LLM adding support for the GPT4All collection of models
Apache License 2.0

FileNotFoundError: Model file does not exist when no models have been downloaded #23

Closed b421 closed 2 months ago

b421 commented 3 months ago

Running llm models list after installing the llm-gpt4all plugin in a clean pipx environment returns this:

$ llm models list
Traceback (most recent call last):
  File "/Users/b421/.local/bin/llm", line 8, in <module>
    sys.exit(cli())
             ^^^^^
  File "/Users/b421/.local/pipx/venvs/llm/lib/python3.12/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/b421/.local/pipx/venvs/llm/lib/python3.12/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/Users/b421/.local/pipx/venvs/llm/lib/python3.12/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/b421/.local/pipx/venvs/llm/lib/python3.12/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/b421/.local/pipx/venvs/llm/lib/python3.12/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/b421/.local/pipx/venvs/llm/lib/python3.12/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/b421/.local/pipx/venvs/llm/lib/python3.12/site-packages/llm/cli.py", line 799, in models_list
    for model_with_aliases in get_models_with_aliases():
                              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/b421/.local/pipx/venvs/llm/lib/python3.12/site-packages/llm/__init__.py", line 80, in get_models_with_aliases
    pm.hook.register_models(register=register)
  File "/Users/b421/.local/pipx/venvs/llm/lib/python3.12/site-packages/pluggy/_hooks.py", line 501, in __call__
    return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/b421/.local/pipx/venvs/llm/lib/python3.12/site-packages/pluggy/_manager.py", line 119, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/b421/.local/pipx/venvs/llm/lib/python3.12/site-packages/pluggy/_callers.py", line 138, in _multicall
    raise exception.with_traceback(exception.__traceback__)
  File "/Users/b421/.local/pipx/venvs/llm/lib/python3.12/site-packages/pluggy/_callers.py", line 102, in _multicall
    res = hook_impl.function(*args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/b421/.local/pipx/venvs/llm/lib/python3.12/site-packages/llm_gpt4all.py", line 57, in register_models
    models.sort(
  File "/Users/b421/.local/pipx/venvs/llm/lib/python3.12/site-packages/llm_gpt4all.py", line 59, in <lambda>
    not model.is_installed(),
        ^^^^^^^^^^^^^^^^^^^^
  File "/Users/b421/.local/pipx/venvs/llm/lib/python3.12/site-packages/llm_gpt4all.py", line 179, in is_installed
    GPT4All.retrieve_model(
  File "/Users/b421/.local/pipx/venvs/llm/lib/python3.12/site-packages/llm_gpt4all.py", line 38, in retrieve_model
    return _GPT4All.retrieve_model(model_name, model_path, allow_download, verbose)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/b421/.local/pipx/venvs/llm/lib/python3.12/site-packages/gpt4all/gpt4all.py", line 273, in retrieve_model
    raise FileNotFoundError(f"Model file does not exist: {model_dest!r}")
FileNotFoundError: Model file does not exist: PosixPath('/Users/b421/.cache/gpt4all/mistral-7b-openorca.gguf2.Q4_0.gguf')

It looks like this is due to this commit in gpt4all from last week: https://github.com/nomic-ai/gpt4all/commit/255568fb9a5201c0e9d9d2679772392784448f89

where retrieve_model() now raises FileNotFoundError if the $HOME/.cache/gpt4all folder is empty

Hacky fix in is_installed():

    def is_installed(self):
        try:
            GPT4All.retrieve_model(
                self._details["filename"], allow_download=False, verbose=False
            )
            return True
-       except ValueError:
+       except (FileNotFoundError, ValueError):
            return False
endquote commented 3 months ago

I get this error on llm models list, but also after running the example from the readme after a fresh install: llm -m orca-mini-3b-gguf2-q4_0 '3 names for a pet cow'

jack4git commented 3 months ago

Confirming the same for original issue and comment

shailja-imw commented 3 months ago

@b421 Not working in my case. Is there any update on this? I am also facing the same issue.

YoungPhlo commented 3 months ago

Thanks for the tip! Adding except (FileNotFoundError, ValueError): worked for me

pu-007 commented 2 months ago

In the newest version of gpt4all the error still exists. I solved it by patching retrieve_model() (gpt4all.py, line 399):

        model_dest = model_path / model_filename
        if model_dest.exists():
            config["path"] = str(model_dest)
            if verbose:
                print(f"Found model file at {str(model_dest)!r}", file=sys.stderr)
        elif allow_download:
            # If model file does not exist, download
            filesize = config.get("filesize")
            config["path"] = str(
                cls.download_model(
                    model_filename,
                    model_path,
                    verbose=verbose,
                    url=config.get("url"),
                    expected_size=None if filesize is None else int(filesize),
                    expected_md5=config.get("md5sum"),
                )
            )
        else:
            # raise FileNotFoundError(f"Model file does not exist: {model_dest!r}")
            # removed the raise; download anyway, same as the branch above
            filesize = config.get("filesize")
            config["path"] = str(
                cls.download_model(
                    model_filename,
                    model_path,
                    verbose=verbose,
                    url=config.get("url"),
                    expected_size=None if filesize is None else int(filesize),
                    expected_md5=config.get("md5sum"),
                )
            )
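With the raise removed, the else branch duplicates the elif allow_download branch, so the two can be collapsed into one unconditional download path. A sketch of that collapsed logic as a standalone helper (the function name and the injected download_model callable are illustrative, not gpt4all's API; note that, like the patch above, this silently ignores allow_download=False, which callers such as the plugin's is_installed() rely on):

```python
import sys
from pathlib import Path

def resolve_model_path(config: dict, model_path: Path, download_model,
                       verbose: bool = False) -> str:
    """Use the local model file if present; otherwise always download it."""
    model_dest = model_path / config["filename"]
    if model_dest.exists():
        if verbose:
            print(f"Found model file at {str(model_dest)!r}", file=sys.stderr)
        return str(model_dest)
    # Model file does not exist: download unconditionally
    filesize = config.get("filesize")
    return str(
        download_model(
            config["filename"],
            model_path,
            verbose=verbose,
            url=config.get("url"),
            expected_size=None if filesize is None else int(filesize),
            expected_md5=config.get("md5sum"),
        )
    )
```

A downside of this approach is that is_installed() can no longer distinguish installed from not-installed models, so the earlier fix of catching FileNotFoundError in the plugin is probably the cleaner workaround.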