I thought I was going crazy or that it was something with my local machine, but it was happening on Modal too. This started happening after I updated to today's release: gpt4all==0.3.0.
Just for some -- probably unnecessary -- context: I only tried the ggml-vicuna* and ggml-wizard* models, tried setting model_type, tried both allowing and not allowing downloads, and confirmed the hash of the downloaded model files before I ran out of time on lunch.
import modal

stub = modal.Stub()  # assumed: the stub definition was elided from my snippet

@stub.cls()
class GPT4AllModel:
    def __enter__(self):
        import gpt4all

        self.model = gpt4all.GPT4All(
            "ggml-vicuna-13b-1.1-q4_2.bin", allow_download=True
        )

    @modal.method()
    def generate(self, text: str):
        return self.model.chat_completion([{"role": "user", "content": text}])
ValueError: Unable to instantiate model
Thanks for the tip! I just downgraded to 0.2.3 and it works now.
Just for completeness, what system are you on, if I may ask? If it's Linux, what distro and version?
I'm doing a few tests on Windows now with gpt4all==0.3.0 from PyPI. All of these are "old" models from before the format change. Here's what works:
So it looks like none of the LLaMA-based "old" models can be loaded with the PyPI gpt4all v0.3.0.
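If you want to repeat that kind of check yourself, a quick loop like this should do; the model names are just examples, adjust them to whatever you have on disk:

```python
from gpt4all import GPT4All

# Try to instantiate each local model without downloading anything;
# the 0.3.x bindings raise ValueError("Unable to instantiate model") on failure.
for name in ("ggml-gpt4all-j-v1.3-groovy",
             "ggml-gpt4all-l13b-snoozy",
             "ggml-vicuna-13b-1.1-q4_2"):
    try:
        GPT4All(name, allow_download=False)
        print(name, "-> loads")
    except ValueError as err:
        print(name, "->", err)
```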
I just tried ggml-gpt4all-l13b-snoozy.bin and the error led me here. So you can check that one as well.
I'm running gpt4all 0.3.0 on Ubuntu 22.04 with Jupyter Notebook, using ipykernel 6.23.1, ipython 8.11.0 and python 3.10.6. Downgrading to 0.2.3 solved this issue.
I am on a Mac (Intel processor). I'm curious, what are the old and new versions? Thanks.
I'm just calling it that. The background is:
- GPT4All depends on the llama.cpp project.
- There were breaking changes to the model format in the past.
- The GPT4All devs first reacted by pinning/freezing the version of llama.cpp this project relies on.
- With the recent release, it now includes multiple versions of said project, and is therefore able to deal with new versions of the format, too.

The models you see in the 'Downloads' dialog of the chat application are almost all in the old format.
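If you're not sure which format a given .bin file is in, you can peek at its magic bytes. This is a rough sketch based on my reading of llama.cpp's magic constants at the time -- an illustration, not an official tool:

```python
import struct

# Magic constants from llama.cpp's source (as I understand them).
GGML = 0x67676D6C  # "old", unversioned format
GGMF = 0x67676D66  # versioned, pre-mmap format
GGJT = 0x67676A74  # versioned, mmap-able "new" format

def ggml_format(path: str) -> str:
    with open(path, "rb") as f:
        magic, = struct.unpack("<I", f.read(4))
        if magic == GGML:
            return "old unversioned ggml"
        if magic in (GGMF, GGJT):
            version, = struct.unpack("<I", f.read(4))
            return ("ggmf" if magic == GGMF else "ggjt") + f" v{version}"
        return "unknown -- not a ggml/llama.cpp file?"

print(ggml_format("ggml-v3-13b-hermes-q5_1.bin"))  # expected: ggjt v3
```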
If all of them are in the old format, then which versions of the various packages will give the right output? And how do I make it run with the latest versions of gpt4all and pyllamacpp?
Not sure if I understand you right. v0.3.0 of the Python package technically supports models both in "old" and new versions of the format, but seems to have some bugs. v0.2.3 and below only supports "old" versions. All the models listed in my previous comment are in the old format.
Yes, you may be right. But when I use GPT4All with the langchain and pyllamacpp packages on the ggml-gpt4all-j-v1.3-groovy.bin model, I get a strange response from the model. Instead of generating a response from the context, it starts generating random text such as:
";&A319$+B%,1'4#";FH$9B,$H=9)F*37@(.!;0$7+6%'D!<+D*4;&;'(=:.5217(*G@'8(2!;F4F9+'%,E&C,38D,B!@)G9*A=2D629E+A@9#@%3#B*#)3;86=5B;7!C0B1*'C@(-!='8!:510:F3)1198H;4=E;.E<8B4""$'342H<(3$'484671)28%E+1#..;77!C-7HH!8,G8"-'$9<!-8#HD)1,=$#6H;BD0$E)%"($:%GH!2'2&<"
When I was using GPT4All==0.2.3 and pyllamacpp==1.0.6, the same model was able to generate the text. Can you tell me what I am doing wrong?
See my response in #902. We should probably move the conversation about your problem there anyway; it's not the same as this one.
Please check if the checksum matches; garbled output like this has been the result of a defective hard drive for me before.
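For example, straight from Python -- the expected md5 is listed in models.json / on the download page; checking the file size at the same time doesn't hurt either:

```python
import hashlib
import os

path = "ggml-gpt4all-j-v1.3-groovy.bin"  # adjust to your model file
print("size:", os.path.getsize(path), "bytes")  # a tiny file means a broken download

md5 = hashlib.md5()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        md5.update(chunk)
print("md5:", md5.hexdigest())  # compare against the published checksum
```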
Same here. When using Groovy, everything's fine. When trying Snoozy or Nous Hermes, I get this type of error:
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gpt4all/gpt4all.py", line 41, in __init__
    self.model.load_model(model_dest)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/gpt4all/pyllmodel.py", line 152, in load_model
    raise ValueError("Unable to instantiate model")
ValueError: Unable to instantiate model
Downgrading to 0.2.3 solved my problem, but ggml-v3-13b-hermes-q5_1.bin still won't load.
v0.2.3 is not compatible with models of the "v3" version of ggml.
It's very odd. Like you, I'm still getting the error on my Mint test VM (based on Ubuntu 22.04) with gpt4all==0.3.2, but building it manually from the current main branch makes it work. Not sure what exactly went wrong with the PyPI package.
I manually built gpt4all and it works with ggml-gpt4all-l13b-snoozy.bin, but with ggml-v3-13b-hermes-q5_1.bin it gives this after the second chat_completion:
llama_eval_internal: first token must be BOS
llama_eval: failed to eval
LLaMA ERROR: Failed to process prompt
That's even more curious. I don't have hermes here, yet. I might try to reproduce that later.
Are you certain you can rule out an 'out of RAM' error?
I have 64 GB of RAM
Right, I've downloaded the model now and can reproduce this exactly, getting the same error. Curiously though, the chat application itself (on Windows and Linux) has no trouble with that model.
Although, that is to say, v2.4.6 of the chat application is not exactly the same as where the main branch is now.
How can I get this ggml model after training a gpt4all model?
Hey, I am also not able to run it. My system is a RHEL 8 based server with 32 cores.
1 validation error for GPT4All _root_ Unable to instantiate model (type=value_error)
although the code was working with 0.2.3. This is just so confusing.
0.3.3 seems to fix some stuff. Even the hermes and replit models work for me.
Have you tried talking to hermes more than one time? If so -- and it works -- what system are you on? But yes, other things got fixed.
By the way, @olegpokhilchenko try v0.3.4 from PyPI.
Still facing this issue!
You have a different problem. See my comment in the issue you opened yourself.
I have just installed v0.3.4 and am attempting to load the hermes model. I see the same error as the others have reported in this post:
Unable to instantiate model (type=value_error)
I'm familiar with that line by now:
Unable to instantiate model (type=value_error)
That's coming from Langchain. Try this:
Only when that works should you install Langchain again.
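That is, first make sure something minimal like this works on its own, with only the plain gpt4all package involved:

```python
from gpt4all import GPT4All

# No Langchain here: if this already fails, the problem is in the
# bindings or the model file, not in Langchain.
model = GPT4All("ggml-gpt4all-j-v1.3-groovy")
print(model.chat_completion([{"role": "user", "content": "Hello!"}]))
```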
Thanks for your help, @cosmic-snow. I created a new conda environment and built the gpt4all package rather than downloading it from PyPI. When following the example in the link above, the line gptj = GPT4All("ggml-gpt4all-j-v1.3-groovy") throws the following error:
>>> gptj = GPT4All("ggml-gpt4all-j-v1.3-groovy")
Found model file at /Users/my_name/.cache/gpt4all/ggml-gpt4all-j-v1.3-groovy.bin
Invalid model file
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/my_name/opt/miniconda3/envs/llm-dl-learn/lib/python3.11/site-packages/gpt4all/gpt4all.py", line 41, in __init__
self.model.load_model(model_dest)
File "/Users/my_name/opt/miniconda3/envs/llm-dl-learn/lib/python3.11/site-packages/gpt4all/pyllmodel.py", line 153, in load_model
raise ValueError("Unable to instantiate model")
ValueError: Unable to instantiate model
The only workaround I've found is downgrading to version 0.2.3, as others on this thread have mentioned. Unfortunately this precludes usage of the v3 models.
What version did you build yourself and what system are you on? v0.3.4 from PyPI should've fixed the problems that came up in this issue: LLaMA models and hermes.
But you're even having trouble with the basic groovy, so that doesn't look good.
Also, looks like you're on a Mac? I can't even troubleshoot that, I don't have one.
I'm getting the same error as greenguy33 when trying the hermes model, on both Windows and Linux. I've tried versions 0.2.3, 0.3.3, and 0.3.4. Nothing seems to work for hermes. Groovy works fine for me.
>>> import gpt4all
>>> gptj = gpt4all.GPT4All("ggml-v3-13b-hermes-q5_1")
Found model file at C:\Users\myname\.cache\gpt4all\ggml-v3-13b-hermes-q5_1.bin
Invalid model file
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\ProgramData\Anaconda3\lib\site-packages\gpt4all\gpt4all.py", line 41, in __init__
    self.model.load_model(model_dest)
  File "C:\ProgramData\Anaconda3\lib\site-packages\gpt4all\pyllmodel.py", line 153, in load_model
    raise ValueError("Unable to instantiate model")
ValueError: Unable to instantiate model
And you guys are sure the model checksums are correct and you have enough RAM to run them?
Because I tested hermes here on both Windows and Linux with v0.3.4 and it worked both times. In fact, the problem with hermes wasn't even that it wouldn't load (v0.3.3 fixed that), but that you couldn't talk to it more than once (v0.3.4 fixed that).
You should really include details about your OS / distro / version, RAM, maybe even CPU. Because these problems are not the same as the initial errors we had in this issue.
Edit: Oh and while we're at it: also Python version, how you installed it and whether you use a virtual environment. Maybe there's a hint somewhere if you can provide all that extra info.
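For reference, the multi-turn check I mean is simply something like this, using the same chat_completion API as elsewhere in this thread:

```python
from gpt4all import GPT4All

model = GPT4All("ggml-v3-13b-hermes-q5_1")
# The bug fixed in v0.3.4 only showed up on the *second* call, so ask twice:
print(model.chat_completion([{"role": "user", "content": "Hi, who are you?"}]))
print(model.chat_completion([{"role": "user", "content": "What can you do?"}]))
```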
I was using Windows 10 version 22H2 os build 19045.3086. 64GB ram. Intel(R) Core(TM) i9-10900F CPU @ 2.80GHz 2.81 GHz.
I also have the GUI GPT4All chat.exe installed on the same computer and hermes runs fine on that.
My python version is 3.8.5. I installed it directly from the python website. I'm not using a virtual environment.
It does seem that the checksum is off. My groovy matches, but I'm getting bb27febb3cbcd4e91638c2787e4f5b8b instead of f26b99c320ff358f4223a973217eb31e for hermes.
OK, I figured it out. The hermes file was only 27 KB. I deleted the file, reran the Python code under gpt4all 0.3.4, and it worked. I then deleted the file, downgraded to 0.3.0, and was able to recreate the 27 KB file. I must have downloaded hermes before the 0.3.3 fix; it then kept the broken file and tried to use it when I upgraded to 0.3.4. Deleting the old file first seems to do the trick.
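In script form, the cleanup amounts to something like this -- the cache path is the one from the traceback above, and the size threshold is arbitrary (a real 13B model file is several GB):

```python
import os

# Remove a truncated cached download (mine was only 27 KB) so that the
# next GPT4All(...) call with downloads allowed fetches a fresh copy.
path = os.path.expanduser("~/.cache/gpt4all/ggml-v3-13b-hermes-q5_1.bin")
if os.path.exists(path) and os.path.getsize(path) < 1_000_000_000:
    os.remove(path)
```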
I'm trying to run it on:
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: RedHatEnterprise
Description:    Red Hat Enterprise Linux release 8.7 (Ootpa)
Release:        8.7
Codename:       Ootpa
I created a new venv and built from source using these instructions: https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-bindings/python/README.md
But when I run it, I get a segmentation fault:
python test_model_gpt4all.py
Found model file at ./models/ggml-gpt4all-j-v1.3-groovy.bin
Segmentation fault
I tried 0.3.4, 0.3.3, and 0.3.0; with all of those I get:
Invalid model file
When I try 0.2.3, I get:
OSError: /lib64/libm.so.6: version `GLIBC_2.29' not found (required by /venv/lib/python3.8/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libllama.so)
Let me know if anyone knows how to get GLIBC_2.29. I'm not sure if I would need to be a super user to install it or what to install.
glibc is the GNU C library and a fundamental part of the system. Unless you know what you're doing, don't mess with that. There are ways around it, but rebuilding yourself so that it sits on top of your own glibc is the obvious one. The error means that it was compiled for a glibc that's newer than yours. Not surprising: it looks like the base system they use for compilation is Ubuntu 22.04.
I've seen someone else produce a segmentation fault somehow after building it on a RedHat 8, see: https://github.com/nomic-ai/gpt4all/issues/971#issuecomment-1590661079. But I can't really troubleshoot that at the moment, I don't have that or a related system at hand right now.
If you don't have another system and can't resolve the segmentation fault somehow, your next best bet would probably be to try it in some kind of container.
I upgraded to 0.3.5, and Nous Hermes now fully works!
Using ubuntu:latest works for me. Just need to install python3.11 (and also python3.11-venv) and then I can run through the test code:
from gpt4all import GPT4All
gptj = GPT4All("ggml-gpt4all-j-v1.3-groovy")
If you also want to build llama.cpp, feel free to install gcc-11, g++-11, and so on, which are all available. This had me struggling for the whole day; everything just works within a container 🙃
I haven't tested further, but at least the base model can be properly loaded.
I had this error on 0.3.2 with Nous Hermes, but I can confirm that upgrading to the latest version (0.3.6) fixed the problem. Ubuntu 20.04, 16 GB RAM.
This still seems broken on Rocky 8.8. The same downloaded .bin file works on Windows. I've tried a bunch of different models and they all suffer the same issue on Rocky 8.
ValueError: Unable to instantiate model
I haven't been able to get a model to load.
# cat /etc/redhat-release
Rocky Linux release 8.8 (Green Obsidian)
root@test75:/home/rocky # python3.11 -m venv test
root@test75:/home/rocky # source test/bin/activate
(test) root@test75:/home/rocky # pip install gpt4all
Collecting gpt4all
Using cached gpt4all-1.0.2-py3-none-manylinux1_x86_64.whl (3.1 MB)
Collecting requests
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Collecting tqdm
Using cached tqdm-4.65.0-py3-none-any.whl (77 kB)
Collecting charset-normalizer<4,>=2
Using cached charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (197 kB)
Collecting idna<4,>=2.5
Using cached idna-3.4-py3-none-any.whl (61 kB)
Collecting urllib3<3,>=1.21.1
Using cached urllib3-2.0.3-py3-none-any.whl (123 kB)
Collecting certifi>=2017.4.17
Using cached certifi-2023.5.7-py3-none-any.whl (156 kB)
Installing collected packages: urllib3, tqdm, idna, charset-normalizer, certifi, requests, gpt4all
Successfully installed certifi-2023.5.7 charset-normalizer-3.1.0 gpt4all-1.0.2 idna-3.4 requests-2.31.0 tqdm-4.65.0 urllib3-2.0.3
(test) root@test75:/home/rocky # python
Python 3.11.2 (main, Jun 22 2023, 04:35:24) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from gpt4all import GPT4All
>>> model = GPT4All("/home/rocky/nous-hermes-13b.ggmlv3.q4_0.bin")
Found model file at /home/rocky/nous-hermes-13b.ggmlv3.q4_0.bin
Invalid model file
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/rocky/test/lib64/python3.11/site-packages/gpt4all/gpt4all.py", line 48, in __init__
self.model.load_model(model_dest)
File "/home/rocky/test/lib64/python3.11/site-packages/gpt4all/pyllmodel.py", line 177, in load_model
raise ValueError("Unable to instantiate model")
ValueError: Unable to instantiate model
Notes: I am using the libraries from anaconda3 as the ones included with the OS are too old.
# export LD_LIBRARY_PATH=/root/anaconda3/lib/
As in an earlier post, I also get a segmentation fault when running the above, after following the build instructions and building the Python bindings on Rocky 8 (and pip installing them).
Any help appreciated!
As others mentioned, inspect the checksum. In a conda prompt on Windows this can be done with:
certutil -hashfile '.\ggml-gpt4all-j-v1.3-groovy.bin' MD5
I don't know why, but I had some .bin files with altered checksums after transferring them to a different device.
In my case the .bin files were downloaded separately from source. The md5sum looks OK and matches the one in https://raw.githubusercontent.com/nomic-ai/gpt4all/main/gpt4all-chat/metadata/models.json
"md5sum": "4acc146dd43eb02845c233c29289c7c5"
[rocky@test75~]$ md5sum /home/rocky/nous-hermes-13b.ggmlv3.q4_0.bin
4acc146dd43eb02845c233c29289c7c5 /home/rocky/nous-hermes-13b.ggmlv3.q4_0.bin
Your installation output shows everything comes from the cache. What happens if you pip uninstall or start with a new environment, clear the cache, and pip install once again (with --no-cache-dir to be sure)? If this suggests something is wrong with the wheels or with compiling, you could check whether you have a C++ compiler and whether it's accessible through the PATH system variable. This can be verified with 'g++ --version', but maybe Linux has other variations.
Thanks for the input. I built a new venv and ensured all wheels were downloaded fresh. The result is still the same.
root@test75:/home/rocky # python3.11 -m venv test2
root@test75:/home/rocky # source test2/bin/activate
(test2) root@test75:/home/rocky # pip --no-cache-dir install gpt4all
Collecting gpt4all
Downloading gpt4all-1.0.2-py3-none-manylinux1_x86_64.whl (3.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.1/3.1 MB 64.6 MB/s eta 0:00:00
Collecting requests
Downloading requests-2.31.0-py3-none-any.whl (62 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.6/62.6 kB 382.8 MB/s eta 0:00:00
Collecting tqdm
Downloading tqdm-4.65.0-py3-none-any.whl (77 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 77.1/77.1 kB 360.3 MB/s eta 0:00:00
Collecting charset-normalizer<4,>=2
Downloading charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (197 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 197.3/197.3 kB 397.6 MB/s eta 0:00:00
Collecting idna<4,>=2.5
Downloading idna-3.4-py3-none-any.whl (61 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.5/61.5 kB 362.1 MB/s eta 0:00:00
Collecting urllib3<3,>=1.21.1
Downloading urllib3-2.0.3-py3-none-any.whl (123 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 123.6/123.6 kB 402.9 MB/s eta 0:00:00
Collecting certifi>=2017.4.17
Downloading certifi-2023.5.7-py3-none-any.whl (156 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 157.0/157.0 kB 392.5 MB/s eta 0:00:00
Installing collected packages: urllib3, tqdm, idna, charset-normalizer, certifi, requests, gpt4all
Successfully installed certifi-2023.5.7 charset-normalizer-3.1.0 gpt4all-1.0.2 idna-3.4 requests-2.31.0 tqdm-4.65.0 urllib3-2.0.3
[notice] A new release of pip available: 22.3.1 -> 23.1.2
[notice] To update, run: pip install --upgrade pip
(test2) root@test75:/home/rocky # export LD_LIBRARY_PATH=/root/anaconda3/lib/
(test2) root@test75:/home/rocky # python
Python 3.11.2 (main, Jun 22 2023, 04:35:24) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from gpt4all import GPT4All
>>> model = GPT4All("/home/rocky/nous-hermes-13b.ggmlv3.q4_0.bin")
Found model file at /home/rocky/nous-hermes-13b.ggmlv3.q4_0.bin
Invalid model file
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/rocky/test2/lib64/python3.11/site-packages/gpt4all/gpt4all.py", line 48, in __init__
self.model.load_model(model_dest)
File "/home/rocky/test2/lib64/python3.11/site-packages/gpt4all/pyllmodel.py", line 177, in load_model
raise ValueError("Unable to instantiate model")
ValueError: Unable to instantiate model
Yes, there is a C++ compiler installed and accessible through the PATH environment variable.
(test2) root@test75:/home/rocky # g++ --version
g++ (GCC) 8.5.0 20210514 (Red Hat 8.5.0-18)
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
(test2) root@test75:/home/rocky # which g++
/bin/g++
@nsladen Clearing the pip cache apparently does not remove the bindings for C++. These are stored in a system or environment location; I couldn't find out where. From what I gathered, pip also doesn't update them. Perhaps that's why there are recommendations around to build from source using cmake and a compiler. Earlier you mentioned building bindings. Did you follow this article to build gpt4all from source then? If not, it might be worth a shot; it seems straightforward on Linux. I don't know what this entails:
I still suspect the error's cause can be sought in the direction of the Python to C++ bindings. The FAQ on the documentation site, as well as the Readme under the gpt4all-backend folder in this repo, mentions there was a compatibility change for llama.cpp affecting gpt4all models. So, if you have an incompatible binding around that is still cached somewhere, maybe you could try to get a different version of that.
You can also take your .bin file upstream with:
- https://github.com/nomic-ai/pygpt4all/tree/main/pyllamacpp It has instructions to recompile llama.cpp, a chat function, and tools to convert old model files to the new format.
- https://github.com/ggerganov/llama.cpp More information on familiar topics such as compatibility, compiling, checksums.
That's my best shot at this for now. I don't know more about bindings and build processes yet. Probably it's time to learn Docker first.
> @nsladen Clearing the pip cache apparently does not remove the bindings for C++.

The native libraries are part of the package, unless you build them yourself.

> I still suspect the error's cause can be sought in the direction of the Python to C++ bindings. [...]

That's different. Upstream llama.cpp broke format compatibility (the format of the .bin files). As a response, GPT4All provides several libraries for current and older formats.

> You can also take your .bin file upstream with: [...] Probably it's time to learn Docker first.

Yes, Docker is probably the answer -- as mentioned before (or in a related issue). If you have a closer look, the GCC there is version 8-something. This project requires the C++20 standard for some things. To me, it's not the biggest surprise that a custom build is producing a segfault somewhere. For comparison, my GCC in MSYS2:
$ gcc --version
gcc.exe (Rev7, Built by MSYS2 project) 13.1.0
Can anyone please reply with the correct solution for the error below?
raise ValueError("Unable to instantiate model")
ValueError: Unable to instantiate model
You need to have enough RAM and the latest GPT4All version.
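To quickly confirm both on your machine (psutil is an extra install, used here only for the RAM check):

```python
from importlib.metadata import version
print("gpt4all", version("gpt4all"))  # should be the latest release

import psutil  # pip install psutil
print("free RAM:", psutil.virtual_memory().available // 2**30, "GiB")
```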
Is anyone else having the same problem after upgrading to the latest version (gpt4all 1.0.6)? I checked the md5 checksum of my model and it matches the official checksum. Should I downgrade to 0.3.4?
from gpt4all import GPT4All
model = GPT4All("ggml-gpt4all-j-v1.3-groovy.bin", model_path="./local_models")
Found model file at ./local_models/ggml-gpt4all-j-v1.3-groovy.bin
Invalid model file
Traceback (most recent call last):
...
ValueError: Unable to instantiate model
I'd say try with v1.0.5 first. What OS or distro and version are you using?
Issue you'd like to raise.
I am trying to follow the basic Python example. I have also downloaded the model .bin file from gpt4all.io: https://gpt4all.io/models/ggml-vicuna-13b-1.1-q4_2.bin
However, it fails:
Any help will be much appreciated.