nomic-ai / gpt4all

GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
https://nomic.ai/gpt4all
MIT License

Cannot get gpt4all Python Bindings to install or run properly on Windows 11, Python 3.9. #717

Closed gavtography closed 1 year ago

gavtography commented 1 year ago

I'm a complete beginner, so apologies if I'm missing something obvious. I'm trying to follow the README here https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-bindings/python/README.md

I installed gpt4all with pip perfectly fine. Then I installed the Cygwin64 Terminal and ran the commands in the tutorial. Everything goes well until "cmake --build . --parallel". This is what I get:

$ cmake --build . --parallel
MSBuild version 17.5.1+f6fdcf537 for .NET Framework

Checking Build System
ggml.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llama.cpp\ggml.dir\Debug\ggml.lib
Auto build dll exports
llama.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\llama.dll
common.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llama.cpp\examples\common.dir\Debug\common.lib
Building Custom Rule C:/cygwin64/home/USER/gpt4all/gpt4all-backend/CMakeLists.txt
quantize-stats.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\quantize-stats.exe
main.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\main.exe
Microsoft (R) C/C++ Optimizer Version 19.35.32217.1 for x64
gptj.cpp
Copyright (C) Microsoft Corporation. All rights reserved.
cl /c /I"C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build" /I"C:\cygwin64\home\USER\gpt4all\gpt4all-backend\llama.cpp." /Zi /W1 /WX- /diagnostics:column /Od /Ob0 /D _WINDLL /D _MBCS /D WIN32 /D _WINDOWS /D "CMAKE_INTDIR="Debug"" /D llmodel_EXPORTS /Gm- /EHsc /RTC1 /MDd /GS /fp:precise /Zc:wchar_t /Zc:forScope /Zc:inline /GR /Fo"llmodel.dir\Debug" /Fd"llmodel.dir\Debug\vc143.pdb" /external:W1 /Gd /TP /errorReport:queue "C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp" "C:\cygwin64\home\USER\gpt4all\gpt4all-backend\mpt.cpp"
save-load-state.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\save-load-state.exe
vdot.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\vdot.exe
embedding.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\embedding.exe
q8dot.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\q8dot.exe
perplexity.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\perplexity.exe
quantize.vcxproj -> C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\bin\Debug\quantize.exe
C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp(280,13): error C7555: use of designated initializers requires at least '/std:c++20' [C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llmodel.vcxproj]
C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp(418,33): warning C4477: 'fprintf' : format string '%lu' requires an argument of type 'unsigned long', but variadic argument 3 has type 'int64_t' [C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llmodel.vcxproj]
C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp(418,33): message : consider using '%llu' in the format string [C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llmodel.vcxproj]
C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp(418,33): message : consider using '%Iu' in the format string [C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llmodel.vcxproj]
C:\cygwin64\home\USER\gpt4all\gpt4all-backend\gptj.cpp(418,33): message : consider using '%I64u' in the format string [C:\cygwin64\home\USER\gpt4all\gpt4all-backend\build\llmodel.vcxproj]

From what I can see, some type of error is happening, especially because "libllmodel.*" does not exist in "gpt4all-backend/build".

If I continue the tutorial anyway and try to run the Python code, then "pyllmodel.py" opens in Visual Studio and I get the following error:

Exception has occurred: FileNotFoundError
Could not find module 'c:\cygwin64\home\USER\gpt4all\gpt4all-bindings\python\gpt4all\llmodel_DO_NOT_MODIFY\build\libllama.dll' (or one of its dependencies). Try using the full path with constructor syntax.
  File "C:\cygwin64\home\USER\gpt4all\gpt4all-bindings\python\gpt4all\pyllmodel.py", line 49, in load_llmodel_library
    llama_lib = ctypes.CDLL(llama_dir, mode=ctypes.RTLD_GLOBAL)
  File "C:\cygwin64\home\USER\gpt4all\gpt4all-bindings\python\gpt4all\pyllmodel.py", line 55, in <module>
    llmodel, llama = load_llmodel_library()
  File "C:\cygwin64\home\USER\gpt4all\gpt4all-bindings\python\gpt4all\__init__.py", line 1, in <module>
    from .pyllmodel import LLModel # noqa
  File "C:\Users\USER\Desktop\bigmantest.py", line 1, in <module>
    from gpt4all import GPT4All
FileNotFoundError: Could not find module 'c:\cygwin64\home\USER\gpt4all\gpt4all-bindings\python\gpt4all\llmodel_DO_NOT_MODIFY\build\libllama.dll' (or one of its dependencies). Try using the full path with constructor syntax.
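For context on why this happens: the C7555 error above means the backend build failed, so the DLL the bindings want to load was never produced, and `ctypes.CDLL` then raises `FileNotFoundError`. A minimal sketch of how you can probe this yourself before loading, assuming the path from the traceback; `load_backend` is a hypothetical helper for illustration, not part of the gpt4all bindings:

```python
import ctypes
import os
import sys

def load_backend(dll_path):
    """Return the loaded library, or None if the build artifact is missing."""
    if not os.path.exists(dll_path):
        # Nothing to load: the cmake build step has to succeed first.
        return None
    if sys.platform == "win32":
        # On Python 3.8+ for Windows, dependent DLLs are only searched in
        # directories registered via os.add_dll_directory().
        os.add_dll_directory(os.path.dirname(dll_path))
    return ctypes.CDLL(dll_path, mode=ctypes.RTLD_GLOBAL)

# Path taken from the traceback above -- adjust to your own checkout.
lib = load_backend(r"C:\cygwin64\home\USER\gpt4all\gpt4all-bindings"
                   r"\python\gpt4all\llmodel_DO_NOT_MODIFY\build\libllama.dll")
if lib is None:
    print("libllama.dll not found; rebuild the backend before using the bindings.")
```

If this prints the "not found" message, the fix is on the C++ side (the cmake step), not in the Python code.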

Not sure if it's a bug, or I'm missing something, but any help would be appreciated. Reminder that I'm a beginner, so hoping for not too much technical jargon that might be difficult for me to understand. Thanks!

Information

Related Components

Reproduction

Simply following the README on Windows 11, Python 3.9. Nothing special.

Expected behavior

For the example Python script to successfully output a response to "Name 3 colors" after downloading "ggml-gpt4all-j-v1.3-groovy".

gavtography commented 1 year ago

I see, interesting, I understand now. I appreciate the help once again!

gavtography commented 1 year ago

@cosmic-snow Apologies for the ping, as well as for digging up an old, solved issue. I'm just following up on your last reply about the default prompt header; I think I need a bit more clarification on how it works. Is your explanation here still relevant, or has this changed?

I'm having trouble figuring out what exactly to modify in GPT4All.py in order to override the default prompt header and replace it with my own. I understand that changes are coming to this, but I'm not sure how soon that'll be, and in any case I'm not certain how the new system would work either. A simplified explanation would be appreciated. Thank you for your time!

cosmic-snow commented 1 year ago

Is your explanation here still relevant or has this changed?

Not relevant anymore, I think. Things have changed, and as soon as #1145 gets merged and the next release happens, things will change even more.

Currently, you may want to either subclass GPT4All or just monkey patch things to your liking, I guess.
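The subclass route looks roughly like this. Note this is a sketch on a stand-in class: the name of the prompt-building method inside the installed `gpt4all/gpt4all.py` has changed between releases, so check your own copy and override whichever method actually assembles the header:

```python
class GPT4All:
    """Stand-in for gpt4all.GPT4All, just to show the pattern."""
    def _build_prompt(self, user_text):
        # Hypothetical default header; the real bindings' wording differs.
        return "### Instruction:\n" + user_text + "\n### Response:\n"

class CustomHeaderGPT4All(GPT4All):
    """Subclass that swaps the default prompt header for its own."""
    def _build_prompt(self, user_text):
        return "You are a terse assistant.\n" + user_text + "\n"

prompt = CustomHeaderGPT4All()._build_prompt("Name 3 colors")
print(prompt)
```

The monkey-patch alternative is the same idea without a subclass: assign your own function to the attribute on the class (`GPT4All._build_prompt = my_func`, using the real method name) before constructing any instances.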

gavtography commented 1 year ago

Currently, you may want to either subclass GPT4All or just monkey patch things to your liking, I guess.

Yeah, I thought about doing this, but I was hoping for a better solution because I worry about the efficiency/effectiveness.

Would it be a better idea for me to wait for the merge? Is that how soon we're talking?

cosmic-snow commented 1 year ago

Would it be a better idea for me to wait out the merge? is that how soon we're talking?

Merge is likely as soon as tomorrow. Not sure about the release yet.

Adaverse commented 1 year ago

I followed the discussion and installed gpt4all. But when running the snippet below:

model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin", device='gpu')

I get the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\gpt4all\gpt4all-bindings\python\gpt4all\gpt4all.py", line 97, in __init__
    self.model.init_gpu(model_path=self.config["path"], device=device)
  File "C:\gpt4all\gpt4all-bindings\python\gpt4all\pyllmodel.py", line 237, in init_gpu
    available_gpus = [device.name.decode('utf-8') for device in self.list_gpu(model_path)]
  File "C:\gpt4all\gpt4all-bindings\python\gpt4all\pyllmodel.py", line 216, in list_gpu
    raise ValueError("Unable to retrieve available GPU devices")
ValueError: Unable to retrieve available GPU devices

Any idea on how to make it work with GPU?
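One pragmatic workaround while debugging, assuming the GPU path simply isn't available on this machine (the bindings raise `ValueError` when they can't enumerate a supported GPU, as in the traceback above): catch the error and retry on the CPU default. `load_with_fallback` is a hypothetical helper sketched here, not part of the gpt4all API:

```python
def load_with_fallback(model_name, loader, preferred_device="gpu"):
    """Try the preferred device first; fall back to the default (CPU) path.

    `loader` is whatever constructs the model, e.g. gpt4all.GPT4All.
    """
    try:
        return loader(model_name, device=preferred_device)
    except ValueError:
        # e.g. "Unable to retrieve available GPU devices" -- retry on CPU
        # so the script stays usable on machines without a supported GPU.
        return loader(model_name)

# Usage (requires the gpt4all package and a downloaded model):
# from gpt4all import GPT4All
# model = load_with_fallback("orca-mini-3b.ggmlv3.q4_0.bin", GPT4All)
```

This doesn't make the GPU work, of course; it just keeps the program running while you check whether your GPU and drivers are actually supported by the backend.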