your-papa / obsidian-Smart2Brain

An Obsidian plugin to interact with your privacy focused AI-Assistant making your second brain even smarter!

Unable to start second brain, console error #76

Open · Npahlfer opened this issue 3 months ago

Npahlfer commented 3 months ago

What happened?

When starting the second brain I receive the error in the screenshot below and it doesn't load. I have tried to clear out the plugin data. And Ollama is running.
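(For reference, one quick way to confirm Ollama is reachable and which models are pulled, assuming the stock local endpoint; adjust the URL if you run Ollama elsewhere:)

```ts
// Lists the models Ollama has pulled; also confirms the server is reachable.
// Assumes the default local endpoint http://localhost:11434.
const res = await fetch("http://localhost:11434/api/tags");
const { models } = await res.json();
console.log(models.map((m: { name: string }) => m.name));
```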

Error Statement

[screenshot of the console error]

Smart Second Brain Version

1.0.0

Debug Info

SYSTEM INFO:
    Obsidian version: v1.5.12
    Installer version: v1.4.13
    Operating system: Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:24 PDT 2023; root:xnu-10002.41.9~6/RELEASE_ARM64_T6000 23.1.0
    Login status: not logged in
    Insider build toggle: off
    Live preview: on
    Base theme: adapt to system
    Community theme: Things v2.1.19
    Snippets enabled: 0
    Restricted mode: off
    Plugins installed: 10
    Plugins enabled: 6
        1: Kanban v1.5.3
        2: Excalidraw v2.1.1
        3: Advanced Tables v0.21.0
        4: Tasks v6.2.0
        5: Templater v2.2.3
        6: Smart Second Brain v1.0.0

RECOMMENDATIONS:
    Custom theme and snippets: for cosmetic issues, please first try updating your theme and disabling your snippets. If still not fixed, please try to make the issue happen in the Sandbox Vault or disable community theme and snippets.
    Community plugins: for bugs, please first try updating all your plugins to latest. If still not fixed, please try to make the issue happen in the Sandbox Vault or disable community plugins.

Npahlfer commented 3 months ago

[screenshot]

Npahlfer commented 3 months ago

After clearing out the plugin data and doing the init process a few more times, it started to run again.

Leo310 commented 3 months ago

Even though it works after clearing the plugin data, this shouldn't happen, so I will reopen this issue. Can you reproduce it?

jkunczik commented 2 months ago

I have the same issue. It happens every time I start the chat after the vault has been closed. Clearing the plugin data works, but only for that session.

I am using an external Ollama instance and have reproduced the problem with different models (I tried llama3:8b and wizardlm2:7b). I am using nomic-embed-text as embedding model.

As a side note: every time I try to query my vault (after I have cleared the plugin data and rebuilt the index), I get an error message (same as #68):

Failed to run Smart Second Brain (Error: ,Error: User query is too long or a single document was longer than the context length (should not happen as we split documents by length in post processing).,). Please retry.

I am not sure if this has something to do with the other issue, or if it is unrelated.
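For context, the "split documents by length in post processing" step the error refers to is presumably a standard text-splitter pass before embedding; here is a minimal LangChain JS sketch (chunk sizes and the import path are illustrative, not necessarily what the plugin actually uses):

```ts
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// Illustrative only: split a long note into chunks small enough for the
// embedding model / LLM context window before indexing.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,   // max characters per chunk (illustrative value)
  chunkOverlap: 200, // overlap so content is not cut mid-sentence
});

const longNoteContent = "...contents of a long vault note...";
const chunks = await splitter.createDocuments([longNoteContent]);
console.log(`${chunks.length} chunks`);
```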

Npahlfer commented 2 months ago

Sorry for being absent for a while @Leo310. I can't reproduce it consistently, but it does happen every now and then. I have the error active right now.

Leo310 commented 2 months ago

Thanks for the observation! We will look into it, but it may take a bit as we are currently busy with university unfortunately.

jkunczik commented 2 months ago

Don't worry, I know all about lack of time 😄 Thanks for this project!

I switched to Mixtral 8x7B. With this model, the plugin seems to load without problems.

jymcheong commented 2 months ago

This happens more frequently if you are syncing your vault with iCloud. I did a test with the exact same contents in a non-cloud folder, and the issue almost never occurred there.

Npahlfer commented 2 months ago

@jymcheong I can confirm that. I'm also syncing my vault with iCloud.

jritsema commented 1 month ago

I'm also getting this error when using ollama/phi3 and ollama/nomic-embed-text.

Failed to run Smart Second Brain (Error: ,Error: User query is too long or a single document was longer than the context length (should not happen as we split documents by length in post processing).,). Please retry.

I'm guessing it has something to do with context window limits. I lowered the "Documents to retrieve" setting to 3 and seem to have gotten past this error.
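My rough mental model (all numbers below are guesses for illustration, not the plugin's actual defaults): the retrieved chunks plus the prompt and chat history have to fit inside the model's context window, so fewer retrieved documents means less chance of overflowing it.

```ts
// Back-of-the-envelope check; every number here is an assumption.
const contextWindowTokens = 4096; // e.g. a typical default context length in Ollama
const chunkSizeTokens = 512;      // rough size of one retrieved chunk
const promptOverheadTokens = 700; // system prompt + user question + chat history

function fitsInContext(documentsToRetrieve: number): boolean {
  const total = promptOverheadTokens + documentsToRetrieve * chunkSizeTokens;
  return total <= contextWindowTokens;
}

console.log(fitsInContext(10)); // false -> likely triggers the "too long" error
console.log(fitsInContext(3));  // true  -> consistent with the lower setting helping
```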

That said, I'm not getting very good responses. Does anyone know if there's a way to get a deeper view into how the RAG is being performed? For example, it would be nice to see the actual chunks that come back from the vector search and see the final prompt that gets sent to the LLM.

I've recently tried Copilot for Obsidian and Smart Connections, and this project is the most attractive in terms of UX for setup, ease of use, look and feel, etc. I really would love it if I could get this working well on my notes.

Leo310 commented 4 weeks ago

> That said, I'm not getting very good responses. Does anyone know if there's a way to get a deeper view into how the RAG is being performed? For example, it would be nice to see the actual chunks that come back from the vector search and see the final prompt that gets sent to the LLM.

You can get a deeper view of the RAG pipeline by configuring one or both of the following in the plugin settings:

  1. Enable debugging, which outputs debug information in the developer console.
  2. Provide a Langsmith API token, which sends your runs to the Langsmith API where you can debug your pipeline in a web interface (roughly as sketched below): [screenshot: Langsmith trace view]
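The Langsmith option corresponds to the standard LangSmith tracing setup in LangChain JS; the sketch below only illustrates what the token is used for (the environment variable names are the stock LangChain ones, and the plugin's internal wiring may differ):

```ts
// Sketch: how LangSmith tracing is typically enabled in LangChain JS.
// The plugin setting effectively supplies the API key for you.
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_API_KEY = "<your Langsmith API token>";
process.env.LANGCHAIN_PROJECT = "smart-second-brain"; // optional: groups runs per project

// With tracing on, each chain run (retrieved chunks, the final prompt and
// the model response) shows up as a trace in the Langsmith web UI.
```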

The error is a duplicate of https://github.com/your-papa/obsidian-Smart2Brain/issues/68. We will look into it.