Closed wwjCMP closed 2 months ago
To resolve the "model 'llama2' not found" error, follow these steps:

1. **Start the model**: Run `ollama run llama2` in your terminal to ensure Ollama is running with the Llama2 model.
2. **Add the model in Supabase**: In the Supabase dashboard (http://localhost:54323), navigate to the `models` table and add a new row for the Llama2 model with the necessary details.
3. **Update `user_settings`**: Update the `user_settings` table to include "ollama/llama2" in the `models` field for your account, ensuring the Llama2 model is selectable and integrated into Quivr.

These steps are based on the latest Quivr documentation, ensuring the Llama2 model is correctly recognized by the system.
To continue the conversation, mention @dosu.
I have completed all the steps you mentioned, but it still shows the same error.
It seems like the "model 'llama2' not found" error persists despite following the initial troubleshooting steps. This issue typically indicates that the inference API cannot locate the 'llama2' model, which could be due to several reasons. Here's a more focused approach to diagnose and resolve this:
1. **Verify the model name**: Ensure the model name "ollama/llama2" is correctly specified in all configurations, including the Supabase dashboard where you manage the `models` and `user_settings` tables.
2. **Check the Ollama service logs**: Look into the Ollama service logs to confirm that the 'llama2' model is indeed loaded. If not, consider restarting the Ollama service.
3. **Confirm the endpoint URL**: Double-check the endpoint URL used by the inference API to ensure it correctly points to the 'llama2' model's location.
4. **Review model compatibility**: Make sure the 'llama2' model is compatible with your current version of Quivr by checking the model compatibility in the Quivr codebase.
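A quick way to work through the log and endpoint checks above from the command line (a sketch, assuming Ollama's default endpoint `http://localhost:11434`; adjust the host and port if your deployment differs):

```shell
# Ask Ollama which models it currently has available:
curl http://localhost:11434/api/tags

# Reproduce the lookup the inference API performs. If the model is
# missing, this returns the same "model 'llama2' not found" error
# seen in the Quivr logs:
curl http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "hello"}'
```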
If after these steps the issue remains, please share more detailed information about the steps you're taking when encountering this error, including any relevant code snippets or additional log outputs. This will help in pinpointing the exact cause of the problem.
I don't want to use llama2, I want to use a custom model.
To integrate your custom model within Quivr and avoid the "model 'llama2' not found" error, especially since you're not intending to use 'llama2', ensure you've correctly added your custom model to the Supabase `models` table and updated the `user_settings` table accordingly. Here's a concise guide:

1. **Update `models`**: In the `models` table, add a new row for your custom model with all necessary details.
2. **Update `user_settings`**: In the `user_settings` table, add your custom model to the `models` field. Ensure the model's name is entered exactly as Ollama expects, including case sensitivity.

If you've done all the above and still face issues, it might be due to an invalid Brain ID, model compatibility, or configuration problems. Consulting the Quivr Discord community or Ollama's support resources could provide further assistance.
This approach focuses on ensuring your custom model is correctly recognized and integrated, addressing the specific error you're encountering.
I am certain that I have filled everything in correctly.
### What happened?

A bug happened! I have added a custom model for Ollama, but it still prompts the following error.

### Relevant log output

```
"error":"model 'llama2' not found, try pulling it first"
```