Nestlium opened this issue 1 year ago
@Nestlium The plugin uses llama2 by default. What worked for me was pulling llama2, but there's an option in the plugin settings called New Command Model that should let you change the model used.
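For example, to pull the default model with the Ollama CLI:

ollama pull llama2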
@8bitbuddhist Thanks. This was the issue for me. Once the New Command Model setting and the proper model name were used, the Obsidian and Ollama integration started working.
I still can't figure this out; could you explain what you did? What are the 'new command' and 'proper model name' you used?
In the plugin settings, look for the option "New Command Model" and enter the name of the LLM model you're using. This is the same name you'd use when running ollama pull [model name] or ollama run [model name].
For example, to run Mistral:
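ollama pull mistral
ollama run mistral

Then enter mistral as the New Command Model in the plugin settings.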
In addition to @8bitbuddhist's steps, the following troubleshooting steps helped me:
# Send a test prompt directly to the Ollama API (use a model you've pulled):
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Tell me why the sky is blue",
  "system": "You are an AI assistant who helps answer queries."
}'
[Screenshot of where to set the debugger]
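If the request above fails with an error about the model, it's also worth double-checking that the model name in the request matches one that's actually installed:

ollama list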
It would be great if the plugin allowed the user to choose the model from a drop-down menu.
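For reference, Ollama already exposes the list of locally installed models over its API, which a drop-down like that could presumably be populated from:

curl http://localhost:11434/api/tags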
Facing the same error. No network call shows up in the network panel, but sending a request manually works, and the variables inside the request body also have the correct values.

Edit: Fixed by removing the trailing / at the end of the Ollama URL, as suggested here: https://github.com/hinterdupfinger/obsidian-ollama/issues/8#issuecomment-1865300368
Manually overriding llama2 to llama3 in the plugin files seems to work after a restart of Obsidian. (The plugin folder is saved in the vault under .obsidian\plugins\ollama.)

I guess any updates will override this though, so it would be nice to have an option to change the model via the plugin settings.
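If anyone wants to try the same workaround, one way to locate the hard-coded model name (on Linux/macOS; path taken from the comment above, and back up the plugin files first) is:

grep -rl "llama2" .obsidian/plugins/ollama/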
I just can't seem to get this to work. I've got Ollama running and can see that it's up at http://localhost:11434, but I keep getting this error when I try to run the plugin. Is there any other configuration I need to do besides having Ollama running and available and the plugin installed?