hinterdupfinger / obsidian-ollama

I keep getting "Error while generating text: Request failed, status 404" #12

Open · Nestlium opened this issue 1 year ago

Nestlium commented 1 year ago

I just can't seem to get this to work. I have Ollama running and can confirm it's reachable at http://localhost:11434, but I keep getting this error when I try to run the plugin. Is there any other configuration I need to do besides having Ollama running and the plugin installed?

8bitbuddhist commented 1 year ago

@Nestlium The plugin uses llama2 by default. What worked for me was pulling llama2, but there's an option in the plugin settings called "New Command Model" that should let you change the model used.
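
For anyone hitting this, a minimal sketch of that first workaround, assuming a standard local Ollama install on the default port:

```sh
# Pull the plugin's default model so requests for it stop returning 404
ollama pull llama2

# Optional: confirm the model responds locally before retrying the plugin
ollama run llama2 "Say hello"
```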

skoyramsPS commented 1 year ago

@8bitbuddhist Thanks. This was the issue for me. Once the new command and proper model name were used, the Obsidian and Ollama integration started working.

Nestlium commented 1 year ago

> @8bitbuddhist Thanks. This was the issue for me. Once the new command and proper model name were used, the Obsidian and Ollama integration started working.

I still can't figure this out. Could you explain what you did? What are the 'new command' and 'proper model name' you used?

8bitbuddhist commented 1 year ago

> @8bitbuddhist Thanks. This was the issue for me. Once the new command and proper model name were used, the Obsidian and Ollama integration started working.
>
> I still can't figure this out. Could you explain what you did? What are the 'new command' and 'proper model name' you used?

In the plugin settings, look for the option "New Command Model" and enter the name of the model you're using. This is the same name you'd use when running `ollama pull [model name]` or `ollama run [model name]`.

For example, to run Mistral:

(screenshot: plugin settings with "New Command Model" set to mistral)
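
As a hedged sketch of checking that the name you enter actually matches what the server has (assuming the default Ollama endpoint):

```sh
# Pull the model you want the plugin to use
ollama pull mistral

# List the models the server knows about; the "name" values returned here
# are what the "New Command Model" setting needs to match
curl http://localhost:11434/api/tags
```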

skoyramsPS commented 1 year ago

> @8bitbuddhist Thanks. This was the issue for me. Once the new command and proper model name were used, the Obsidian and Ollama integration started working.
>
> I still can't figure this out. Could you explain what you did? What are the 'new command' and 'proper model name' you used?

In addition to @8bitbuddhist's steps, the following troubleshooting steps helped me:

  1. Open the Obsidian console (on Ubuntu, the shortcut is Ctrl+Shift+I).
  2. Go to the 'Sources' tab and look for plugin:Ollama.
  3. Look for line 225, or search for the text '/api/generate'.
  4. Add a breakpoint.
  5. You will now be able to check the exact URL, model, and prompt that would be used to make an API request to Ollama.
  6. Create a curl command similar to the example below (replace the values for your use case):
     curl -X POST http://localhost:11434/api/generate -d '{
       "model": "mistral",
       "prompt": "Tell me why the sky is blue",
       "system": "You are an AI assistant who helps answer queries."
     }'
  7. Execute the curl command in a terminal window and check whether you get a response back.

Screenshot of where to set the breakpoint: (image)
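
One note on reading the curl output, assuming a reasonably recent Ollama version: /api/generate streams newline-delimited JSON by default, so if the output looks like many small JSON chunks, adding "stream": false returns a single JSON object that is easier to inspect:

```sh
# Same request, but asking for a single non-streaming JSON response
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Tell me why the sky is blue",
  "stream": false
}'
```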

lockmeister commented 11 months ago

It would be great if the plugin allowed the user to choose the model from a drop-down menu.

notV3NOM commented 10 months ago

Facing the same error

No network call shows up in the Network panel. Sending a request manually works. The variables inside the request body also have the correct values.

(screenshot: request body values shown in the developer console)

Edit: Fixed by removing the trailing / at the end of the Ollama URL, as suggested here: https://github.com/hinterdupfinger/obsidian-ollama/issues/8#issuecomment-1865300368
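
A hypothetical illustration of why the trailing slash matters (this assumes the plugin simply appends the path to the configured base URL, which I haven't confirmed in its source):

```sh
# Base URL configured with a trailing slash -> the request hits //api/generate, which can 404
curl -i http://localhost:11434//api/generate -d '{"model": "mistral", "prompt": "hi"}'

# Base URL without the trailing slash -> the request hits /api/generate as expected
curl -i http://localhost:11434/api/generate -d '{"model": "mistral", "prompt": "hi"}'
```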

rawzone commented 6 months ago

Manually overriding llama2 in the plugin files to llama3 seems to work after a restart of Obsidian. (The plugin folder is saved in the vault under .obsidian\plugins\ollama.)

I guess any update will overwrite this though, so it would be nice to have an option to change the model via the plugin settings.
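
For anyone trying the same workaround on Linux/macOS, a rough sketch of the file edit; which file holds the hard-coded llama2 default (main.js vs. data.json) is an assumption, so check with grep first, and expect a plugin update to overwrite the change:

```sh
# Pull the replacement model first
ollama pull llama3

# Find where "llama2" is referenced inside the plugin folder
cd /path/to/vault/.obsidian/plugins/ollama
grep -rl llama2 .

# Back up and replace in whichever file(s) grep reports, e.g.:
sed -i.bak 's/llama2/llama3/g' data.json

# Restart Obsidian afterwards so the change is picked up
```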