I see, interesting, I understand now. I appreciate the help once again!
@cosmic-snow Apologies for the ping, as well as for digging up an old, solved issue. I'm just following up on your last reply about the default prompt header; I think I need a bit more clarification on how it works. Is your explanation here still relevant, or has this changed?
I'm having trouble figuring out what exactly to modify in gpt4all.py in order to override the default prompt header and replace it with my own. I understand that changes are coming to this, but I'm not sure how soon that'll be, and regardless I'm not certain how the new system would work either. A simplified explanation would be appreciated. Thank you for your time!
> Is your explanation here still relevant or has this changed?
Not relevant anymore, I think. Things have changed, and as soon as #1145 gets merged and the next release happens, things will change even more.
Currently, you may want to either subclass GPT4All or just monkey patch things to your liking, I guess.
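If you go the subclassing route, here's a minimal sketch. Note the method name and signature below are assumptions based on older versions of the bindings; inspect your local gpt4all.py to find where the default prompt header is actually assembled:

```python
from gpt4all import GPT4All

class MyGPT4All(GPT4All):
    # Hypothetical override: "_build_prompt" is an assumed name, not a
    # guaranteed part of the API. Replace it with whichever method in your
    # installed gpt4all.py actually builds the prompt from messages.
    @staticmethod
    def _build_prompt(messages, default_prompt_header=True, default_prompt_footer=True):
        prompt = "My custom prompt header\n"  # replaces the stock header
        for message in messages:
            if message["role"] == "user":
                prompt += f"### Prompt:\n{message['content']}\n"
        prompt += "### Response:\n"
        return prompt
```

Monkey patching would look similar, except you'd assign your replacement function onto the existing class instead of subclassing.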
> Currently, you may want to either subclass GPT4All or just monkey patch things to your liking, I guess.
Yeah, I thought about doing this, but I was hoping for a better solution because I worry about the efficiency/effectiveness.
Would it be a better idea for me to wait for the merge? Is that how soon we're talking?
> Would it be a better idea for me to wait for the merge? Is that how soon we're talking?
Merge is likely as soon as tomorrow. Not sure about the release yet.
I followed the discussion and installed gpt4all, but when I run the snippet below:

```python
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin", device='gpu')
```

I get the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\gpt4all\gpt4all-bindings\python\gpt4all\gpt4all.py", line 97, in __init__
self.model.init_gpu(model_path=self.config["path"], device=device)
File "C:\gpt4all\gpt4all-bindings\python\gpt4all\pyllmodel.py", line 237, in init_gpu
available_gpus = [device.name.decode('utf-8') for device in self.list_gpu(model_path)]
File "C:\gpt4all\gpt4all-bindings\python\gpt4all\pyllmodel.py", line 216, in list_gpu
raise ValueError("Unable to retrieve available GPU devices")
ValueError: Unable to retrieve available GPU devices
Any idea how to make this work with the GPU?
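For reference, one way to keep a script running until the GPU detection issue is sorted out is to catch the ValueError from the traceback above and fall back to CPU inference (a sketch, assuming the same model file as above):

```python
from gpt4all import GPT4All

try:
    # A Vulkan-capable GPU is needed; a ValueError is raised if none is detected
    model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin", device='gpu')
except ValueError:
    # Fall back to CPU inference so the script still runs
    model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin")
```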
I'm a complete beginner, so apologies if I'm missing something obvious. I'm trying to follow the README here https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-bindings/python/README.md
I installed gpt4all with pip perfectly fine. Then I installed the Cygwin64 Terminal and ran the lines in the tutorial. Everything goes well until "cmake --build . --parallel". This is what I get:
From what I can see, some type of error is happening, especially because "libllmodel.*" does not exist in "gpt4all-backend/build".
If I continue the tutorial anyway and try to run the Python code, "pyllmodel.py" opens in Visual Studio and I get the following error:
Not sure if it's a bug or if I'm missing something, but any help would be appreciated. A reminder that I'm a beginner, so I'm hoping to avoid too much technical jargon that might be difficult for me to understand. Thanks!
Reproduction
Simply following the README on Windows 11, Python 3.9. Nothing special.
Expected behavior
For the example Python script to successfully output a response to "Name 3 colors" after downloading "ggml-gpt4all-j-v1.3-groovy".
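For context, the README example in question looks roughly like this. This is a sketch, not the verbatim README code, and the generate() call is an assumption based on the current bindings; the exact API may differ between versions:

```python
from gpt4all import GPT4All

# Downloads "ggml-gpt4all-j-v1.3-groovy" on first use, then prompts it
model = GPT4All("ggml-gpt4all-j-v1.3-groovy")
output = model.generate("Name 3 colors")
print(output)
```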