logancyang / obsidian-copilot

A ChatGPT Copilot in Obsidian
https://www.obsidiancopilot.com/
GNU Affero General Public License v3.0

Langchain fetch error #382

Open iukea1 opened 4 months ago

iukea1 commented 4 months ago

Describe the bug: When I try to chat, or use the QA chat / indexing, I keep getting a Langchain fetch error.

My setup

The error only shows in the Obsidian UI, preventing any text generation from happening.

iukea1 commented 4 months ago

Some extra context

The Langchain Python library is installed on the machine the models are being run from.

tanmay-priy commented 4 months ago

Getting the same error on Ollama and on LM Studio as well. It looks like the model name defaults to gpt-3.5-turbo in the request and does not change even after switching the model. I have both mistral and llama2 locally. The request from the LM Studio log is below:

[2024-03-23 14:34:51.801] [INFO] Received POST request to /v1/chat/completions with body: {
  "model": "gpt-3.5-turbo",
  "temperature": 0.1,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0,
  "n": 1,
  "stream": true,
  "messages": [
    { "role": "system", "content": "You are Obsidian Copilot, a helpful assistant that integrates AI to Obsidian note-taking." },
    { "role": "user", "content": "Hello" }
  ]
}
[2024-03-23 14:34:51.802] [ERROR] Model with key 'gpt-3.5-turbo' not loaded.

logancyang commented 3 months ago

Without a screenshot of the note and console with debug mode on, it's hard to test on my side. Could you provide the screenshot?

LM Studio server mode shouldn't depend on a model name since you load the model first in its UI.

istarwyh commented 3 months ago

> Without a screenshot of the note and console with debug mode on, it's hard to test on my side. Could you provide the screenshot?
>
> LM Studio server mode shouldn't depend on a model name since you load the model first in its UI.

Getting the same error on Ollama as well.

(screenshot attached)

I enabled debug mode in the settings, but I don't know how to see and check the log.

iukea1 commented 3 months ago

> Without a screenshot of the note and console with debug mode on, it's hard to test on my side. Could you provide the screenshot?
>
> LM Studio server mode shouldn't depend on a model name since you load the model first in its UI.

I should add to my post: I am using Ollama to serve the API, not LM Studio.

LieZiWind commented 3 months ago

I encountered the same issue, but it turned out that my own mistake was the cause. I'll share my experience here.

I'm using Windows PowerShell to start Ollama. In fact, you need to run $env:OLLAMA_ORIGINS="app://obsidian.md*"; ollama serve in PowerShell, or set OLLAMA_ORIGINS=app://obsidian.md* followed by ollama serve in cmd. Remember that the Linux-style OLLAMA_ORIGINS=app://obsidian.md* ollama serve won't work there. Simply copying statements into your terminal isn't enough. This is actually mentioned in the repo's local_copilot.md, but somehow the instructions aren't clear enough when viewed in Obsidian.
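To recap the per-shell variants (a sketch; the bash line is just the standard Linux/macOS one-liner syntax, included for completeness):

# PowerShell
$env:OLLAMA_ORIGINS="app://obsidian.md*"; ollama serve

# cmd.exe (two separate commands in the same session)
set OLLAMA_ORIGINS=app://obsidian.md*
ollama serve

# bash/zsh on Linux or macOS
OLLAMA_ORIGINS="app://obsidian.md*" ollama serve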

iukea1 commented 3 months ago

> I encountered the same issue, but it turned out that my own mistake was the cause. I'll share my experience here.
>
> I'm using Windows PowerShell to start Ollama. In fact, you need to run $env:OLLAMA_ORIGINS="app://obsidian.md*"; ollama serve in PowerShell, or set OLLAMA_ORIGINS=app://obsidian.md* followed by ollama serve in cmd. Remember that the Linux-style OLLAMA_ORIGINS=app://obsidian.md* ollama serve won't work there. Simply copying statements into your terminal isn't enough. This is actually mentioned in the repo's local_copilot.md, but somehow the instructions aren't clear enough when viewed in Obsidian.

Trying this out tonight. Thank you

adamchentianming1 commented 3 months ago

I ran the commands accordingly, and several local models are already pulled, but Obsidian Copilot says "do not find llama2, please pull it first".

C:\Users\adam>set OLLAMA_ORIGINS=app://obsidian.md*

C:\Users\adam>ollama serve
time=2024-04-19T04:28:36.174+08:00 level=INFO source=images.go:817 msg="total blobs: 35"
time=2024-04-19T04:28:36.178+08:00 level=INFO source=images.go:824 msg="total unused blobs removed: 0"
time=2024-04-19T04:28:36.179+08:00 level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.1.32)"
time=2024-04-19T04:28:36.181+08:00 level=INFO source=payload.go:28 msg="extracting embedded files" dir=C:\Users\adam\AppData\Local\Temp\ollama3461672080\runners
time=2024-04-19T04:28:36.459+08:00 level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7 cpu]"
[GIN] 2024/04/19 - 04:28:59 | 204 | 20.2µs | 127.0.0.1 | OPTIONS "/api/chat"
[GIN] 2024/04/19 - 04:28:59 | 404 | 771.5µs | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/04/19 - 04:28:59 | 204 | 0s | 127.0.0.1 | OPTIONS "/api/generate"
[GIN] 2024/04/19 - 04:28:59 | 404 | 932.8µs | 127.0.0.1 | POST "/api/generate"
[GIN] 2024/04/19 - 04:31:13 | 404 | 584.7µs | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/04/19 - 04:31:13 | 404 | 931.9µs | 127.0.0.1 | POST "/api/generate"
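For what it's worth, the 404 responses on /api/chat and /api/generate usually mean Ollama couldn't find a model under the exact name it was asked for, rather than a CORS problem (the OPTIONS preflights are returning 204). A rough check, assuming the model name configured in Copilot is llama2:

ollama list          # the tag must match what Copilot requests, e.g. llama2:latest
ollama pull llama2   # pull under that exact name if it is missing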

ryoppippi commented 2 months ago

Same

Gitreceiver commented 2 months ago

same error

iukea1 commented 2 months ago

@ryoppippi @Gitreceiver

I got it all working. Are you guys running on Windows WSL?

ryoppippi commented 2 months ago

I'm using macOS Sonoma.

ihomway commented 2 months ago

In case someone is using fish shell too: I fixed this issue by setting the OLLAMA_ORIGINS variable before running ollama serve:

set -gx OLLAMA_ORIGINS 'app://obsidian.md*'
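Then start the server from that same fish session as usual (assuming ollama is on your PATH):

ollama serve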

Heptamelon commented 2 months ago

Can someone help me? I'm on macOS Sonoma using iTerm2, and none of the answers above seem to work. I still get the Langchain fetch error.

deeplearner5 commented 2 months ago

I get the same error: Windows 10, PowerShell in Windows Terminal. The Ollama server seems to be working when started with $env:OLLAMA_ORIGINS="app://obsidian.md*"; ollama serve, but I still get the Langchain fetch error from Obsidian Copilot when I try to connect. Ollama does seem to be listening:

time=2024-05-14T19:47:01.064+01:00 level=INFO source=routes.go:1052 msg="Listening on [::]:11434 (version 0.1.37)"

and browsing to http://127.0.0.1:11434 gives "Ollama is running". Ollama works normally in a shell, without using the server, so the model itself is working.
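One way to narrow this down (a rough check of my own, assuming the default port 11434) is to ask Ollama directly whether it accepts Obsidian's origin; if OLLAMA_ORIGINS took effect, the response headers should include Access-Control-Allow-Origin:

# use curl.exe in PowerShell, since plain curl is an alias for Invoke-WebRequest
curl.exe -i -H "Origin: app://obsidian.md" http://127.0.0.1:11434/api/tags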

duracell80 commented 1 month ago

This worked for me on Linux with systemd, setting this in ollama.service instead of the app:// origin:

Change OLLAMA_ORIGINS="app://obsidian.md*"

To

OLLAMA_ORIGINS="*"

If it's running as a service and you want to run it manually with ollama serve, stop the service first. Does anyone know why app:// is recommended? Is it a Flatpak thing or a Mac thing?

Anyway, try * on its own.
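For reference, one way to make that change without editing the packaged unit file (a sketch, assuming Ollama is installed as a systemd service named ollama; note that * allows requests from any origin, which is broader than app://obsidian.md*):

sudo systemctl edit ollama
# in the override file that opens, add:
[Service]
Environment="OLLAMA_ORIGINS=*"
# then reload and restart:
sudo systemctl daemon-reload
sudo systemctl restart ollama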