-
https://github.com/nomic-ai/llama.cpp
GPT4All runs Mistral and Mixtral q4 models over 10x faster on my 6600M GPU
-
### Bug Report
There was a noticeable slowdown when doing inference on LLMs, roughly 30-40% fewer tokens per second.
This change affected the CPU, CUDA, and Vulkan backends.
This regression…
-
**Project description**
We've had gpt4all packaged for a few months, but we're missing the Python bindings for it, which could be used e.g. here https://github.com/NixOS/nixpkgs/pull/290501
**Metadat…
-
## Expected Behavior
Please describe the behavior you are expecting.
Use gpt4_all successfully
## Current Behavior
Please describe the behavior you are currently experiencing.
An error occurred w…
-
Hello and thanks for this great repository.
It would be nice to have a field in the CodeGPT settings to input the URL / IP address and port number of an Ollama / GPT4All installation.
Best,
Orkut
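
For what it's worth, a sketch of the values such a field would typically hold, assuming stock local installs (the constant names below are made up for illustration, not CodeGPT setting names):

```python
# Assumed defaults for stock local installs; names are illustrative only.
OLLAMA_BASE_URL = "http://localhost:11434"     # Ollama's default HTTP API endpoint
GPT4ALL_BASE_URL = "http://localhost:4891/v1"  # GPT4All local API server (OpenAI-compatible)
```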
-
When I execute
`model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")` as instructed in the README,
I get the following error:
```
urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] cert…
```
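A common workaround is to fetch the model file manually and point the bindings at the local copy so no download is attempted; a minimal sketch, assuming the `model_path` and `allow_download` parameters of the gpt4all Python constructor (the directory path is made up for the example):

```python
# Hypothetical workaround sketch: if the automatic model download fails on
# certificate verification, download the .gguf manually (browser, curl, etc.)
# and load it from disk so no HTTPS request is made.
from gpt4all import GPT4All

model = GPT4All(
    "Meta-Llama-3-8B-Instruct.Q4_0.gguf",  # file name of the manually downloaded model
    model_path="/path/to/models",          # directory that already contains that file
    allow_download=False,                  # skip the download that raises the SSL error
)
print(model.generate("Hello", max_tokens=32))
```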
-
In a multi-turn conversation I see that the combination of llama-cpp-python and llama-cpp-agent is much slower on the second prompt than the gpt4all Python bindings. See the two screenshots below. Th…
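
For reference, a minimal sketch of the gpt4all side of that comparison, assuming the standard `chat_session()` API (the model name is only illustrative):

```python
# Minimal multi-turn sketch with the gpt4all Python bindings: inside the
# chat_session() context manager the earlier turns remain part of the session,
# so the second generate() call does not start from a cold prompt.
from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # model name is illustrative
with model.chat_session():
    print(model.generate("Explain what a KV cache is.", max_tokens=128))
    # Second turn: reuses the session context from the first turn.
    print(model.generate("Now summarize that in one sentence.", max_tokens=64))
```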
-
Hi guys,
I have been testing gpt4all using the assistant on Windows. It is working fine. However, when I call the same sequence:
role user
role assistant
role user
role assistant
role use
…
-
The GPT4All program crashes every time I attempt to load a model. My laptop should have the necessary specs to handle the models, so I believe there might be a bug or compatibility issue.
Steps to …
-
How would I convert this into the ggml format?
https://huggingface.co/andreaskoepf/pythia-2.8b-gpt4all-pretrain/tree/main
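One possible route today goes through llama.cpp's GGUF converter rather than the older ggml files; a hedged sketch, assuming a recent llama.cpp checkout whose `convert_hf_to_gguf.py` supports the GPT-NeoX/Pythia architecture and whose quantizer binary is named `llama-quantize` (both names have changed across versions):

```python
# Hedged sketch: convert a local clone of the Hugging Face repo to GGUF with
# llama.cpp's converter, then optionally quantize. Script and binary names
# assume a recent llama.cpp checkout; older revisions used convert-hf-to-gguf.py
# and a binary simply called quantize.
import subprocess

HF_DIR = "pythia-2.8b-gpt4all-pretrain"  # local clone of the HF model repo
F16_OUT = "pythia-2.8b-f16.gguf"

subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", HF_DIR,
     "--outfile", F16_OUT, "--outtype", "f16"],
    check=True,
)

# Optional: quantize to Q4_0 with the binary built from llama.cpp.
subprocess.run(
    ["llama.cpp/llama-quantize", F16_OUT, "pythia-2.8b.Q4_0.gguf", "Q4_0"],
    check=True,
)
```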