brankoradovanovic-mcom opened 1 week ago
Happens in this line of `gpt4all.py`:

```python
self.model = LLModel(self.config["path"], n_ctx, ngl, backend)
```
So, it's the backend code apparently.
If `device` is set to `"cpu"`, `backend` is set to `"kompute"`. But then again, if `device` is set to `"kompute"`, `backend` is also set to `"kompute"`. However, `device="kompute"` implies the use of the GPU, while `device="cpu"` does not. Anyway, just thinking out loud here...
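The branch the comment describes could be sketched like this (a hypothetical simplification for illustration, not the actual gpt4all source; `choose_backend` is an invented name):

```python
def choose_backend(device: str) -> str:
    """Hypothetical sketch of the device -> backend mapping described above.

    This is NOT the real gpt4all implementation; it only illustrates why
    device="cpu" surprisingly ends up with the "kompute" backend.
    """
    if device == "cpu":
        return "kompute"   # surprising: CPU still selects the kompute backend
    if device == "kompute":
        return "kompute"   # expected: kompute implies GPU
    return device          # other values pass through unchanged

# Both inputs land on the same backend, which is the oddity noted above:
print(choose_backend("cpu"))      # kompute
print(choose_backend("kompute"))  # kompute
```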
Hi, I ran into this problem too and found out where the problem is. These DLLs must be placed in your venv folder `gpt4all\llmodel_DO_NOT_MODIFY\build`. You can find these DLLs in the application's main folder under `lib`.
@RevengerNick, I tried copying the DLLs, but now the error changed to:

```
RuntimeError: Unable to instantiate model: Could not find any implementations for build variant: kompute
```

Did you also encounter this problem?
Hmm, in that case I don't know what the problem is. Maybe you have an old CPU? The CPU must support AVX instructions. Your code works on my PC with no problems, so check for AVX support.
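On Linux, one quick way to check for AVX support is to look at the CPU flags in `/proc/cpuinfo` (a sketch; on Windows a tool such as CPU-Z shows the same information):

```python
def cpu_supports_avx() -> bool:
    """Check for the "avx" flag in /proc/cpuinfo (Linux x86 only).

    Returns False on platforms where /proc/cpuinfo is unavailable
    or has no "flags" lines (e.g. Windows, macOS, ARM).
    """
    try:
        with open("/proc/cpuinfo") as f:
            return any(
                "avx" in line.split()          # exact token; "avx2" is a separate flag
                for line in f
                if line.startswith("flags")
            )
    except OSError:
        return False

print(cpu_supports_avx())
```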
> Hi, I ran into this problem too and found out where the problem is. These DLLs must be placed in your venv folder `gpt4all\llmodel_DO_NOT_MODIFY\build`. You can find these DLLs in the application's main folder under `lib`.
I've taken a look, but the folder above already contains these DLLs (on both my machines). The Kompute DLLs are also there. What I'm getting at is that the issue here is not the error itself, but the fact that these DLLs are loaded in the first place; it's as if the code goes down the wrong branch. This is just guesswork though, I can't pretend to understand the underlying logic.
I don't think the logic is selective about which of these libraries it loads, though I haven't looked at that code in a while. Of course, all of them need to be present in a publicly available package, because different people have different configurations and needs.
Selecting the right loaded library then happens depending on what you ask for with the `device` argument.
I'm not sure if there's a straightforward way to suppress these messages with the PyPI package. You could build your own, however, and exclude them from the build. Or simply remove the DLLs (probably; not tested).
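As an untested workaround at the Python level, the messages could be silenced by redirecting the stderr file descriptor around the constructor call, since native libraries write to fd 2 directly rather than through Python's `sys.stderr` (a sketch; note it also hides any legitimate errors printed inside the block):

```python
import os
from contextlib import contextmanager

@contextmanager
def suppress_native_stderr():
    """Temporarily redirect file descriptor 2 to the null device so that
    messages printed by native code are hidden.

    Workaround sketch only: it silences ALL stderr output inside the
    block, including real error messages.
    """
    devnull = os.open(os.devnull, os.O_WRONLY)
    saved = os.dup(2)            # keep a copy of the real stderr
    try:
        os.dup2(devnull, 2)      # point fd 2 at the null device
        yield
    finally:
        os.dup2(saved, 2)        # restore the original stderr
        os.close(saved)
        os.close(devnull)

# Hypothetical usage around the constructor from the report:
# with suppress_native_stderr():
#     model = GPT4All(model_name='...', allow_download=False, device='cpu')
```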
For the other two, sounds like you're talking about a different problem than what is described in the issue?
Bug Report
Whichever Python script I run, when calling the `GPT4All()` constructor, say like this:

```python
model = GPT4All(model_name='openchat-3.6-8b-20240522-Q5_K_M.gguf', allow_download=False, device='cpu')
```

...I get the following error messages:
After that, the script continues to run normally, but these spurious error messages are annoying, particularly since:
This did not happen in the earlier versions. I suspect it might be due to upstream changes in llama.cpp, but I'm not sure.
Your Environment