brianpetro / obsidian-smart-connections

Chat with your notes & see links to related content with AI embeddings. Use local models or 100+ via APIs like Claude, Gemini, ChatGPT & Llama 3
https://smartconnections.app
GNU General Public License v3.0
2.83k stars · 185 forks

"Not implemented" error shown when using a local LLM for RAG prompts #835

Closed 3-ark closed 1 month ago

3-ark commented 1 month ago

I am currently using LM Studio, which uses the OpenAI API format, as my backend. Chat works fine with my settings as long as the prompt isn't note-related. P.S. online APIs work great; only the local model fails. Logs attached below. BTW, every time I start up Obsidian the importing process takes much longer than before, and there is a brief black-screen flash. I don't know if this is another bug, but it doesn't affect much.
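For context on the setup above, LM Studio exposes an OpenAI-compatible chat endpoint locally. A minimal sketch of the kind of request such a backend expects — the base URL, port, and model name are assumptions from a typical local setup, not plugin defaults:

```javascript
// Build a request for an OpenAI-compatible /v1/chat/completions endpoint.
// baseUrl, model name, and prompt are illustrative assumptions only.
function buildChatRequest(baseUrl, model, messages) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/v1/chat/completions`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, messages, stream: false }),
    },
  };
}

// Example: a note-related RAG-style prompt with retrieved context in the system message.
const req = buildChatRequest("http://localhost:1234", "llama-3-8b-instruct", [
  { role: "system", content: "Answer using the provided notes." },
  { role: "user", content: "Summarize my note on embeddings." },
]);
```

Pointing the plugin at the wrong path or port on such a local server is a common cause of "works for online APIs, fails locally" symptoms.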

[Screenshot 2024-10-12: error log]

3-ark commented 1 month ago

Update:

It works after deleting the index files and reinstalling the plugin. Now it's working perfectly again.

To add to the initial bug report: it wasn't just the local LLM that failed — the online API stopped working too. Any possible reasons? I will update again if this reproduces.

brianpetro commented 1 month ago

Thanks for the update 🌴

3-ark commented 1 month ago

> Thanks for the update 🌴

Hi,

I think I found the cause; maybe you can check it when someone runs into a similar problem. I renamed my files and folders after the first embedding run, and then the black-screen flash while importing notes came back — but this time the chat window couldn't load. So I repeated the deleting and re-indexing. While deleting the vector storage, I found that the embeddings for the previous titles were still there. Those leftovers are most likely the reason.
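The leftover-entries theory above can be sketched as a small pruning pass: if a vector store keys embeddings by file path, renaming a note re-embeds it under the new key but leaves the old key behind unless stale entries are dropped against the current vault listing. The function and data shapes here are hypothetical, not the plugin's actual internals:

```javascript
// Hypothetical embedding index keyed by file path -> embedding vector.
// Keeps only entries whose file still exists in the vault.
function pruneStaleEmbeddings(index, existingPaths) {
  const live = new Set(existingPaths);
  const pruned = {};
  for (const [path, vector] of Object.entries(index)) {
    // Drop entries for renamed/deleted files; keep everything still present.
    if (live.has(path)) pruned[path] = vector;
  }
  return pruned;
}

// Example: "notes/old-title.md" was renamed to "notes/new-title.md",
// leaving a stale entry under the old path.
const index = {
  "notes/old-title.md": [0.1, 0.2], // stale leftover from before the rename
  "notes/new-title.md": [0.1, 0.2],
};
const cleaned = pruneStaleEmbeddings(index, ["notes/new-title.md"]);
```

Running a pass like this on startup (or on Obsidian's rename event) would avoid the manual delete-and-re-index workaround described above.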