dv0zn3r opened 1 year ago
With Python `gpt4all` 1.0.8, I currently get the following models working out of the box with `llm` + `llm-gpt4all`:
The following models give me the "Model filename not in model list" error out of the box:
However, I've found a workaround that works for some (all?) of them. If you drop into a Python shell and run e.g.:

```python
from gpt4all import GPT4All
model = GPT4All("ggml-vicuna-7b-1.1-q4_2.bin")
```
Using the filename from the exception, subsequent runs of `llm` seem to be able to use the cached model.
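To tell whether the workaround has already cached a given model, a small helper like the one below could check the bindings' download directory. Note that `~/.cache/gpt4all` is an assumption about where the `gpt4all` Python bindings store downloaded models; the exact location may differ by platform or version.

```python
from pathlib import Path
from typing import Optional

def cached_model_path(filename: str, cache_dir: Optional[str] = None) -> Optional[Path]:
    """Return the path to an already-downloaded model file, or None.

    cache_dir defaults to ~/.cache/gpt4all, which is assumed (not
    confirmed) to be where the gpt4all Python bindings cache models.
    """
    base = Path(cache_dir) if cache_dir else Path.home() / ".cache" / "gpt4all"
    candidate = base / filename
    return candidate if candidate.is_file() else None

# Example: check for the model downloaded in the snippet above.
print(cached_model_path("ggml-vicuna-7b-1.1-q4_2.bin"))
```

If this returns a path, the file is already on disk and `llm` should be able to pick it up without re-downloading.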
The Python source for the `gpt4all` package is available here: https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python
Unexpected behaviour: for the models `ggml-mpt-7b-base`, `ggml-mpt-7b-instruct`, and `ggml-gpt4all-j-v1`, the model is not downloaded after the first prompt; instead an error occurs: `Error: Model filename not in model list: <model name>`.