Open dpierson0721 opened 5 months ago
I am also experiencing the same issue.
I have uncommented the `gpt4all==2.0.2` line in default.txt and copied the same settings that are indicated for gpt4all, i.e.:
```
GEN_AI_MODEL_PROVIDER=gpt4all
GEN_AI_MODEL_VERSION=mistral-7b-openorca.gguf2.Q4_0.gguf
QA_TIMEOUT=120  # Set a longer timeout, running models on CPU can be slow
DISABLE_LLM_CHOOSE_SEARCH=True
DISABLE_LLM_CHUNK_FILTER=True
DISABLE_LLM_QUERY_REPHRASE=True
DISABLE_LLM_FILTER_EXTRACTION=True
QA_PROMPT_OVERRIDE=weak
```
I have installed gpt4all on Ubuntu 22.04, which is fully up to date. I installed Docker and downloaded mistral-7b-openorca.gguf2.Q4_0.gguf to the directory /home/aiadmin/.local/share/nomic.ai/GPT4All/.
I am not sure what I am missing to run this model locally. Any assistance would be appreciated.
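As a sanity check, I was planning to confirm that the downloaded file itself loads with the gpt4all Python bindings (the pip package), something along these lines -- if this works, I assume the model file is fine and the problem is on the config side?

```python
# Sanity check: load the downloaded .gguf directly with the gpt4all pip package.
# The path matches the directory mentioned above; adjust if yours differs.
from gpt4all import GPT4All

model = GPT4All(
    model_name="mistral-7b-openorca.gguf2.Q4_0.gguf",
    model_path="/home/aiadmin/.local/share/nomic.ai/GPT4All",
    allow_download=False,  # fail immediately if the local file is missing or unreadable
)

print(model.generate("Say hello in one short sentence.", max_tokens=32))
```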
I will say this: I gave up on gpt4all and started using Ollama. It's much the same thing, but it has official LLM support, and I've been pleased with the performance so far using Ollama instead.
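For anyone making the same switch, my setup ended up looking roughly like the below. I'm going from memory, so treat the provider name and the endpoint variable as things to verify against the current docs rather than gospel; the host/port assume a default Ollama install on the Docker host.

```
GEN_AI_MODEL_PROVIDER=ollama_chat                       # litellm-style provider name, check the docs
GEN_AI_MODEL_VERSION=mistral                            # whatever you pulled with `ollama pull`
GEN_AI_API_ENDPOINT=http://host.docker.internal:11434   # default Ollama port on the Docker host
QA_TIMEOUT=120                                          # CPU inference is still slow, keep the long timeout
```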
This issue is stale because it has been open 75 days with no activity. Remove stale label or comment or this will be closed in 15 days.
I have enabled gpt4all using env variables but I still get the window to configure an OpenAI API key (or custom).
I'm using the dev version because I want it running on localhost only.
Is there something else I need to do besides going into the docker-compose.dev.yml file and adding all of the suggested settings from the docs?
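For example, would something like this in a .env file next to docker-compose.dev.yml be enough, or do the values have to go into each service's environment: section?

```
# .env next to docker-compose.dev.yml (values from the gpt4all section above)
GEN_AI_MODEL_PROVIDER=gpt4all
GEN_AI_MODEL_VERSION=mistral-7b-openorca.gguf2.Q4_0.gguf
QA_TIMEOUT=120
# ...plus the DISABLE_LLM_* settings and QA_PROMPT_OVERRIDE listed earlier
```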