logancyang / obsidian-copilot

THE Copilot in Obsidian
https://www.obsidiancopilot.com/

[BUG] Ollama service doesn't work after update #579

Closed · wwjCMP closed this 2 months ago

wwjCMP commented 2 months ago

My Ollama service runs on another Linux computer on my local network.

logancyang commented 2 months ago

What do the logs say? Did you enable CORS?

In your case, you probably need to set the base URL so the plugin can reach that Linux machine on your local network. That means using the openai-format provider instead of ollama as the model provider.
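
A minimal sketch of that setup (192.168.1.50 is just a placeholder for the Linux machine's LAN address): start Ollama on the Linux box so it listens on all interfaces and allows the Obsidian origin, then verify it is reachable from the machine running Obsidian.

# On the Linux machine: listen on all interfaces and allow the Obsidian origin (CORS)
OLLAMA_HOST=0.0.0.0:11434 OLLAMA_ORIGINS="app://obsidian.md*" ollama serve

# From the Obsidian machine: confirm the server and its models are reachable
curl http://192.168.1.50:11434/api/tags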

williamlwclwc commented 2 months ago

I had a similar issue: I got 404 on Ollama API calls. Not sure if I did anything wrong, but I noticed my previous Ollama config was gone after the latest update.

OLLAMA_ORIGINS=app://obsidian.md* ollama serve
2024/09/01 13:31:03 routes.go:1125: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/williamliu/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[app://obsidian.md* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR:]"
time=2024-09-01T13:31:03.430-07:00 level=INFO source=images.go:753 msg="total blobs: 9"
time=2024-09-01T13:31:03.432-07:00 level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-09-01T13:31:03.432-07:00 level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.9)"
time=2024-09-01T13:31:03.433-07:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/var/folders/hm/h00h5px10kz3lgg5jpt41h_r0000gn/T/ollama396951871/runners
time=2024-09-01T13:31:03.469-07:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [metal]"
time=2024-09-01T13:31:03.511-07:00 level=INFO source=types.go:107 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="21.3 GiB" available="21.3 GiB"
[GIN] 2024/09/01 - 13:31:10 | 200 |      60.584µs |       127.0.0.1 | GET      "/api/version"
[GIN] 2024/09/01 - 13:31:10 | 200 |    5.414416ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/09/01 - 13:31:10 | 200 |    1.543833ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/09/01 - 13:31:10 | 200 |      870.75µs |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/09/01 - 13:31:10 | 200 |    4.747875ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/09/01 - 13:31:10 | 200 |    7.364958ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/09/01 - 13:31:10 | 200 |    1.082166ms |       127.0.0.1 | GET      "/api/tags"
[GIN] 2024/09/01 - 13:31:10 | 200 |   28.872125ms |       127.0.0.1 | POST     "/api/show"
[GIN] 2024/09/01 - 13:31:46 | 404 |      619.75µs |       127.0.0.1 | POST     "/v1/chat/completions"
[GIN] 2024/09/01 - 13:37:13 | 404 |     676.292µs |       127.0.0.1 | POST     "/v1/chat/completions"
[GIN] 2024/09/01 - 13:37:13 | 404 |     257.792µs |       127.0.0.1 | POST     "/v1/chat/completions"
[GIN] 2024/09/01 - 13:37:14 | 404 |    5.028958ms |       127.0.0.1 | POST     "/v1/chat/completions"
[GIN] 2024/09/01 - 13:37:16 | 404 |     380.958µs |       127.0.0.1 | POST     "/v1/chat/completions"
[GIN] 2024/09/01 - 13:37:17 | 404 |     263.625µs |       127.0.0.1 | POST     "/v1/chat/completions"
[GIN] 2024/09/01 - 13:37:18 | 404 |     866.792µs |       127.0.0.1 | POST     "/v1/chat/completions"
[GIN] 2024/09/01 - 13:37:20 | 404 |      425.25µs |       127.0.0.1 | POST     "/v1/chat/completions"
[GIN] 2024/09/01 - 13:37:21 | 404 |     233.125µs |       127.0.0.1 | POST     "/v1/chat/completions"
[GIN] 2024/09/01 - 13:37:22 | 404 |     245.417µs |       127.0.0.1 | POST     "/v1/chat/completions"
[GIN] 2024/09/01 - 13:37:27 | 404 |     227.666µs |       127.0.0.1 | POST     "/v1/chat/completions"
[GIN] 2024/09/01 - 13:37:27 | 404 |     330.542µs |       127.0.0.1 | POST     "/v1/chat/completions"
[GIN] 2024/09/01 - 13:37:28 | 404 |     266.125µs |       127.0.0.1 | POST     "/v1/chat/completions"
logancyang commented 2 months ago

@williamlwclwc 404 usually means you haven't pulled (ollama pull) the model name you specified. Right now, when you add a Custom Model, the exact model name must be present in Ollama.
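
A quick way to check (llama3.1 below is only an example name; use whatever you entered in the Custom Model field):

# List what Ollama actually has, and pull the model if it's missing
ollama ls
ollama pull llama3.1

# Hit the OpenAI-compatible endpoint directly; a 404 here means Ollama doesn't know that model name
curl http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.1", "messages": [{"role": "user", "content": "hello"}]}'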

teddyzxcv commented 2 months ago

@williamlwclwc Try running ollama ls and check whether the model you're using exists in Ollama.

williamlwclwc commented 2 months ago

Yeah, I should use the actual model name instead of "ollama". It works now, thank you.

jsrdcht commented 2 months ago

> What do the logs say? Did you enable CORS?
>
> In your case, you probably need to set the base URL so the plugin can reach that Linux machine on your local network. That means using the openai-format provider instead of ollama as the model provider.

When using openai-format, the plugin always prompts for an API key. With the previous version, using the ollama (cloud deployment) model did not require an API key.

logancyang commented 2 months ago

@jsrdcht I just released 2.6.1, can you check whether it fixes your issue?

jsrdcht commented 2 months ago

> @jsrdcht I just released 2.6.1, can you check whether it fixes your issue?

I'm using 2.6.1. The new version of the plugin does not provide an option to set the Ollama URL, and when using a third-party model it's necessary to enter an API key; otherwise the plugin shows an error in the chat window.

logancyang commented 2 months ago

@jsrdcht 2.6.1 fills in a placeholder API key for openai-format models that have no API key set. Can you refresh the plugin, or delete and re-add the model, and try again?

jsrdcht commented 2 months ago

> @jsrdcht 2.6.1 fills in a placeholder API key for openai-format models that have no API key set. Can you refresh the plugin, or delete and re-add the model, and try again?

(screenshot)

wwjCMP commented 2 months ago

(screenshot) Maybe we need this option for the remote service.

logancyang commented 2 months ago

@wwjCMP @jsrdcht Ah sorry, I found the issue; it will be fixed with #585. In the meantime, you can set a random string as the API key.
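
To sanity-check that workaround outside the plugin (model name and key below are placeholders): as far as I can tell, Ollama's OpenAI-compatible endpoint doesn't validate the key, so any non-empty string works.

# The Authorization header only needs to be non-empty; Ollama ignores its value
curl http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-any-random-string" \
  -d '{"model": "llama3.1", "messages": [{"role": "user", "content": "hi"}]}'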

wwjCMP commented 2 months ago

> @wwjCMP @jsrdcht Ah sorry, I found the issue; it will be fixed with #585. In the meantime, you can set a random string as the API key.

(screenshot)

covercash2 commented 2 months ago

+1, Ollama was working perfectly until I updated :/

covercash2 commented 2 months ago

Is the Ollama API not supported anymore?

logancyang commented 2 months ago

Don't you guys watch my release video? 😢

404 means you have not pulled the Ollama model.

wwjCMP commented 2 months ago

> Don't you guys watch my release video? 😢
>
> 404 means you have not pulled the Ollama model.

I don't think that's the reason; I was able to use Ollama successfully via the third-party (OpenAI-compatible) provider.

logancyang commented 2 months ago

@wwjCMP In your case, using a 3rd-party openai-format provider for it is also correct.

Ollama was not using the base URL from the Custom Model field to override its default localhost:11434; it will in 2.6.2, which I'm pushing now.
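
For the remote setup discussed above, the settings would then roughly look like this (192.168.1.50 stands in for the Linux machine's address; exact field names may differ in the plugin UI):

# ollama provider: point the base URL straight at the remote server
Base URL: http://192.168.1.50:11434

# openai-format provider: the OpenAI-compatible API lives under /v1, and any non-empty API key will do
Base URL: http://192.168.1.50:11434/v1
API Key:  any-random-string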

logancyang commented 2 months ago

@wwjCMP 2.6.2 is live. It should fix your issue. Reopen if it's not fixed.