I have used ollama to get the model, running "ollama pull llama3" on the command line.
In the settings-ollama.yaml, I have changed the line
llm_model: mistral
to
llm_model: llama3 # mistral
After restarting privateGPT, the model is displayed in the UI.
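For reference, the relevant section of my settings-ollama.yaml now looks roughly like this (the surrounding keys are from my local copy and may differ between privateGPT versions):

ollama:
  llm_model: llama3  # was: mistral
  embedding_model: nomic-embed-text
  api_base: http://localhost:11434

I restart privateGPT with "PGPT_PROFILES=ollama make run", the run command from the privateGPT README; your setup may differ.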
Apologies for asking, but it seems like you are hinting that the model is displayed in the UI but not actually working? Or have I overinterpreted the statement?
Remember that if you decide to use another LLM model in ollama, you have to pull it first:
ollama pull llama3
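You can confirm the pull succeeded by listing the models available locally:

ollama list

llama3 should appear in the listing with its tag and size.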
After downloading, make sure that Ollama is working as expected. You can check this with an example cURL request:
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?"
}'
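Note that /api/generate streams the response back as a series of JSON lines by default. If you would rather get a single JSON object, the Ollama API accepts a "stream" option, so the same check can be written as:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'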
How is it possible to use Llama3 instead of Mistral with privateGPT?