brianpetro / obsidian-smart-connections


After changing embedding model & setting custom OpenAI chat model: Plugin doesn't load, sets model from past installation, reinstall doesn't fix it #765


raumzeit77 commented 1 month ago

I am running [2.1.99] on my MacBook Air M2 (16 GB). Yesterday I tried switching the embedding model from bge-micro to nomic-embed-text-v1.5 and then to jina-embeddings-v2. With nomic, the embedding process didn't start at all; with jina, it halted shortly before completion:

[Screenshots: Bildschirmfoto 2024-09-04 um 19 09 03, Bildschirmfoto 2024-09-04 um 19 09 25]

I don't know why this happens, since my system is capable of embedding my whole vault with mxbai-embed-large, which seems more demanding on paper.

I also pointed the Smart Chat model platform at my local Ollama llama3.1 instance using the custom OpenAI settings described in this Reddit thread: https://www.reddit.com/r/LocalLLaMA/comments/1cm6u9f/comment/l3l7gy3/ This actually worked – at first. Note that I installed Ollama with Homebrew on macOS, and Homebrew also starts the serving process automatically, so I didn't set anything up manually apart from the parameters in the Smart Connections plugin settings.
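
For reference, Ollama serves an OpenAI-compatible API at http://localhost:11434/v1 by default, so the server can be checked independently of the plugin. A minimal sketch of such a check (the model name llama3.1 matches what I configured; runs on Node 18+, e.g. via npx tsx):

```ts
// Quick sanity check that Ollama's OpenAI-compatible endpoint is reachable.
// Assumes Ollama's default port 11434 and that llama3.1 has been pulled.
const response = await fetch("http://localhost:11434/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3.1",
    messages: [{ role: "user", content: "Say hello." }],
  }),
});

const data = await response.json();
// In the OpenAI response schema the reply sits at choices[0].message.content.
console.log(data.choices?.[0]?.message?.content ?? data);
```

If this returns a reply, the Ollama side is healthy and the problem lies with the plugin configuration.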

Unfortunately, the plugin then stopped working entirely. The Smart Connections and Smart Chat views don't load, and the model platform dropdown in the settings shows no options. Re-enabling the plugin, restarting Obsidian, uninstalling and reinstalling the plugin, deleting all associated folders manually, hitting the various refresh buttons and commands, etc. don't help – or only briefly, for about one restart.

When I reinstall the plugin and reopen Obsidian, without touching any settings, I get a very brief notification reading "model set: llama3.1". I think this is the cause of the issue: somehow the custom OpenAI model I defined in a past install persists even after reinstalling the plugin. How can that be? The message even pops up when I select a different model platform for the Smart Chat after a reinstall (and the plugin then fails again shortly after).
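
For anyone trying to reproduce this: Obsidian normally keeps per-plugin settings in `.obsidian/plugins/<plugin-id>/data.json` inside the vault, so that file is one place an old model name could survive. A rough sketch for dumping any model-related keys from it; the vault path is a placeholder and the key names inside data.json are assumptions, not the plugin's documented schema:

```ts
// Dump persisted Smart Connections settings from the vault.
// Uses Obsidian's standard per-plugin settings location; adjust VAULT.
import { readFileSync } from "node:fs";
import { join } from "node:path";

const VAULT = "/path/to/MyVault"; // hypothetical vault path
const settingsPath = join(VAULT, ".obsidian/plugins/smart-connections/data.json");

const settings = JSON.parse(readFileSync(settingsPath, "utf8"));

// Print any key that looks model-related; the exact key names in data.json
// are an assumption here, so we match broadly rather than hard-coding them.
for (const [key, value] of Object.entries(settings)) {
  if (/model/i.test(key)) console.log(key, "=", JSON.stringify(value));
}
```

If a llama3.1 entry is still in that file after an uninstall, something (e.g. a sync service) is restoring it, which would explain the notification.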

I am a novice user, so I am at my wit's end. Can you help me out?

brianpetro commented 2 weeks ago

I stick with bge-micro-v2. While the other models sometimes work, it is really hit or miss. I expect things to improve in the future, but that depends on the underlying Hugging Face Transformers.js library, which is outside of my control. Other platforms may be able to run the same models, but since this is essentially implemented in a browser, it's not an apples-to-apples comparison 🌴
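
For context, local embedding here runs through Transformers.js, so whether a given model works comes down to whether Transformers.js can load it. A minimal sketch of that path, assuming the bge-micro-v2 weights published under TaylorAI/bge-micro-v2 (the exact model id and its ONNX availability are the load-bearing assumptions):

```ts
// Minimal Transformers.js embedding run, the same stack the plugin relies on.
// npm install @xenova/transformers
import { pipeline } from "@xenova/transformers";

// Model id is an assumption; it must have ONNX weights published for
// Transformers.js to load it, which is exactly where some models fail.
const embed = await pipeline("feature-extraction", "TaylorAI/bge-micro-v2");

const output = await embed("Chat with your notes", {
  pooling: "mean",   // average token vectors into one embedding
  normalize: true,   // unit-length vector, ready for cosine similarity
});

console.log(output.dims); // e.g. [1, 384] for bge-micro-v2
```

Models without browser-compatible weights on the Hub simply fail to load in this stack, which is one concrete way the "hit or miss" behavior shows up.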