brianpetro / obsidian-smart-connections

Chat with your notes & see links to related content with AI embeddings. Use local models or 100+ via APIs like Claude, Gemini, ChatGPT & Llama 3
https://smartconnections.app
GNU General Public License v3.0

Errors using the chat #831

Open · ReaderGuy42 opened 1 month ago

ReaderGuy42 commented 1 month ago

I just installed the plugin but can't get it to work. I have several Ollama models installed, and they work fine on their own, but whenever I try using the chat I get this error:

*An error occurred. See console logs for details.*

And this is in the console:

```
plugin:smart-connections:2397 POST https://openrouter.ai/api/v1/chat/completions 429 (Too Many Requests)
plugin:smart-connections:3095 CustomEvent {isTrusted: false, data: '{"error":{"message":"Rate limit exceeded: free-models-per-day","code":429}}', source: SmartStreamer, detail: null, type: 'error', …}
plugin:smart-connections:2989 CustomEvent {isTrusted: false, data: '{"error":{"message":"Rate limit exceeded: free-models-per-day","code":429}}', source: SmartStreamer, detail: null, type: 'error', …}
```

I'm also not seeing a way to choose the model, only the embed_model. Is that correct? I've tried several of those, but nothing changed.

Any ideas?

This is in a test vault with only one note, so there's not much to index. I wanted to test the chat before I started "connecting".

Thanks

florisvoskamp commented 1 month ago

I just realized why it doesn't work: the settings for API keys and models have moved to a different settings page (the icon for it was initially hidden for me). [screenshot] This is where you can find it. The error is the result of no API key being configured.
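
If that diagnosis is right, the fix is simply supplying the key; OpenRouter expects it as a Bearer token. A sketch of the authenticated call, assuming the OpenAI-style request/response shape (the function name and model id are illustrative, not the plugin's internals):

```typescript
// Sketch of the call with an API key attached, assuming OpenRouter's
// documented Authorization: Bearer scheme.
async function chatWithKey(apiKey: string, prompt: string): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // the piece missing in the failing requests above
    },
    body: JSON.stringify({
      model: "openrouter/auto", // illustrative model id
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`${res.status}: ${await res.text()}`);
  const data = await res.json();
  return data.choices[0].message.content; // OpenAI-style response shape
}
```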

ReaderGuy42 commented 1 month ago

Oh, cool, I hadn't seen that button; it was being covered by the dev console, lol.

Which Model Platform do I choose for Ollama? Custom Local (OpenAI format)?

And what do I need to put into hostname, protocol, path, and port?

derseebaer1 commented 1 month ago

Exactly my question too!
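
For reference: Ollama exposes an OpenAI-compatible endpoint on http://localhost:11434 by default, so "Custom Local (OpenAI format)" with protocol http, hostname localhost, port 11434, and path /v1/chat/completions is the likely mapping. These values come from Ollama's documented defaults, not from the plugin's settings, so treat them as an assumption. A quick way to confirm the endpoint answers before wiring up the plugin:

```typescript
// Connectivity check against Ollama's OpenAI-compatible endpoint.
// Assumptions: default host/port (localhost:11434) and a locally pulled
// model named "llama3" (substitute whatever `ollama list` shows).
async function pingOllama(): Promise<void> {
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" }, // no API key needed locally
    body: JSON.stringify({
      model: "llama3",
      messages: [{ role: "user", content: "ping" }],
    }),
  });
  // On success this logs 200 and an OpenAI-style completion object.
  console.log(res.status, await res.json());
}

pingOllama().catch(console.error);
```

If this returns a 200 with an OpenAI-style completion, the same values should work in the plugin's custom-local fields.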