zylon-ai / private-gpt

Interact with your documents using the power of GPT, 100% privately, no data leaks
https://privategpt.dev
Apache License 2.0

Use Llama3 for PrivateGpt #1885

Closed kabelklaus closed 2 months ago

kabelklaus commented 5 months ago

How is it possible to use Llama3 instead of Mistral for PrivateGPT?

dlorenz70 commented 5 months ago

I used Ollama to get the model, via the command line: `ollama pull llama3`. In `settings-ollama.yaml`, I changed the line `llm_model: mistral` to `llm_model: llama3 # mistral`.

After restarting private gpt, I get the model displayed in the ui.
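For reference, the edit above would look roughly like this in `settings-ollama.yaml` (a sketch based on the comment; the exact nesting may differ in your PrivateGPT version, and the rest of the file stays unchanged):

```yaml
ollama:
  llm_model: llama3  # was: mistral
```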

skyworld2147 commented 4 months ago

> I have used ollama to get the model, using the command line "ollama pull llama3" In the settings-ollama.yaml, I have changed the line llm_model: mistral to llm_model: llama3 # mistral
>
> After restarting private gpt, I get the model displayed in the ui.

Apologies for asking, but it sounds like you are hinting that the model is displayed in the UI yet not actually working? Or have I misinterpreted the statement?

jaluma commented 2 months ago

Remember that if you decide to use another LLM model in Ollama, you have to pull it first: `ollama pull llama3`. After downloading, make sure Ollama is working as expected. You can check this with a cURL request like:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?"
}'
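By default, Ollama's `/api/generate` endpoint streams its reply as newline-delimited JSON objects, each carrying a `response` text fragment and a final object with `done: true`. A minimal Python sketch (assuming that response shape) for reassembling the full answer from the streamed lines:

```python
import json

def assemble_ollama_response(ndjson_text: str) -> str:
    """Concatenate the 'response' fragments from Ollama's streamed
    newline-delimited JSON output into the full generated answer."""
    parts = []
    for line in ndjson_text.strip().splitlines():
        if not line.strip():
            continue
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):  # final chunk signals end of stream
            break
    return "".join(parts)

# Fabricated two-chunk stream for illustration:
sample = (
    '{"model":"llama3","response":"The sky is blue ","done":false}\n'
    '{"model":"llama3","response":"because of Rayleigh scattering.","done":true}\n'
)
print(assemble_ollama_response(sample))
```

If you just want a single JSON object instead of a stream, you can also add `"stream": false` to the cURL request body.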