brianpetro / obsidian-smart-connections

Chat with your notes & see links to related content with AI embeddings. Use local models or 100+ via APIs like Claude, Gemini, ChatGPT & Llama 3
https://smartconnections.app
GNU General Public License v3.0
2.73k stars · 181 forks

Unlock Local AI Processing in Obsidian (feature request) #302

Closed · dicksensei69 closed this 4 months ago

dicksensei69 commented 1 year ago

I'm writing to request a feature that would allow users to easily switch between different AI APIs within obsidian-smart-connections. Specifically, I'm interested in being able to toggle between the OpenAI API and emerging alternatives like Oobabooga's textgen and Llamacpp.

These new services offer exciting capabilities like local embeddings and on-device processing that could enhance the Obsidian experience, especially for users who want to avoid sending personal data to third parties. I've found where the API endpoint is configured in the code, and with some tweaking I may be able to switch between them manually. However, having an official option to select different APIs would provide a much smoother experience.

For those wondering, the API endpoint is currently hardcoded in several places in main.js:

line 1043: url: https://api.openai.com/v1/embeddings,

line 2666: const url = "https://api.openai.com/v1/chat/completions";

line 2719: url: https://api.openai.com/v1/chat/completions,

To manually change the API, these endpoints could be modified to point to a local service like Oobabooga, or to another provider like Anthropic. However, this involves directly editing the source code, which is cumbersome.

Ideally, there could be a function that defaults to OpenAI but allows the API URL to be configured as a setting. Users could then switch to local IPs or other services with a simple configuration change. Furthermore, if this setting were exposed through the GUI, it would enable seamless API swapping without any code editing required.
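For illustration, a minimal sketch of what that could look like (the setting name api_endpoint is just an assumption, not anything in the current code), defaulting to OpenAI but overridable:

```js
// Hypothetical sketch only: read the base URL from plugin settings instead of hardcoding it.
// "api_endpoint" is an assumed setting name, not something in the current plugin.
const DEFAULT_SETTINGS = {
  api_endpoint: "https://api.openai.com/v1", // default stays OpenAI
};

function embeddings_url(settings) {
  const base = settings.api_endpoint || DEFAULT_SETTINGS.api_endpoint;
  return `${base}/embeddings`; // e.g. a local OpenAI-compatible server instead
}

function chat_completions_url(settings) {
  const base = settings.api_endpoint || DEFAULT_SETTINGS.api_endpoint;
  return `${base}/chat/completions`;
}
```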

The open source ecosystem is rapidly evolving, and empowering users to take advantage of these new innovations aligns with Obsidian's ethos of flexibility and customization. Users would love to rely on their own local hardware for AI processing rather than being locked into a single provider.

Thank you for your consideration. Obsidian has been invaluable for my workflow, and I'm excited by its potential to integrate some of these cutting-edge AI capabilities in a privacy-preserving way. Enabling easy API switching would be a major step forward. Please let me know if I can provide any other details!

dragos240 commented 1 year ago

I may make a PR for this. I've gotten it to work on my local instance of text-generation-webui. All that needs to be done to change the URL is to open main.js and replace the OpenAI API base URL with your own. For it to work with text-generation-webui, you'll need to enable the openai extension, which mimics the endpoints of the OpenAI API. One thing I am not entirely sure about is how the embeddings play with it. I'm testing it out now.
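Roughly, the manual edit looks like this (the local address is an assumption; text-generation-webui's openai extension prints its actual port on startup, so check your console output):

```js
// Manual patch sketch for main.js (not an official option): point the hardcoded
// OpenAI endpoints at the local OpenAI-compatible server exposed by the "openai" extension.
const url = "http://127.0.0.1:5001/v1/chat/completions"; // was "https://api.openai.com/v1/chat/completions"
// and likewise for the embeddings endpoint on line 1043:
// url: "http://127.0.0.1:5001/v1/embeddings",
```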

dragos240 commented 1 year ago

Nevermind, someone beat me to it

nomadphase commented 1 year ago

In case this is not prioritised here, it may be useful to look at the Khoj/Obsidian plugin, which is open source and supports Llama 2.

brianpetro commented 1 year ago

@nomadphase thanks for sharing that project.

I checked it out and it does require a separate desktop application to be installed to use the Obsidian plugin. This is the route I expect will be necessary to utilize local models with Obsidian.

While there hasn't been much publicly to see lately in terms of plugin updates, I have been doing a lot in private that will have big implications for this plugin. For example, allowing the Smart Chat to add to and edit notes is just one long weekend away.

And during my weekday work, I've been chugging away at something that, when it makes its way into Obsidian, will be unlike anything else I've seen publicly as far as AI tools are concerned. To clarify why I bring this up now: I've been focusing on using GPT-3.5 for that project because I want the result to be compatible with local models. Basically, my hypothesis is that if I can make it work with GPT-3.5, then the same functionality should work with local models very soon.

It's still been tough to find a local model for the embeddings that beats OpenAI's ada embeddings. If anyone comes across anything, please let me know.

And lastly, thanks everyone (@dragos240 @dicksensei69 ) for your interest in Smart Connections and I'm looking forward to making more updates soon!

Now back to it, Brian 🌴

ReliablyAwkward commented 1 year ago

I'm eager for updates on this topic, as I'm now playing with Docker for Windows to set up LocalAI; the capabilities the owner hinted at above would be a genuine game changer.

wenlzhang commented 1 year ago

Here are some local-LLM-related tools that might be of interest:

huachuman commented 1 year ago

What about using g4f?

https://github.com/xtekky/gpt4free https://github.com/xiangsx/gpt4free-ts

brianpetro commented 1 year ago

@wenlzhang @huachuman thanks for the resources!

I'm still reviewing options and requirements, but I think we're pretty close to having a local embedding model.

The chat models still require enough hardware resources to give me pause, but we can do a lot with embeddings alone. And if we were to keep using OpenAI for the chat responses while relying on a local embedding model, that would still significantly reduce the exposure of vaults to OpenAI, as only the context used for a specific query would be sent to their servers.
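To sketch that hybrid setup (purely illustrative, not the plugin's actual implementation): embeddings and ranking happen locally, and only the few best-matching excerpts leave the machine along with the question:

```js
// Illustrative only: embed and rank notes locally, then send just the
// top-k matching excerpts to the remote chat model.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function answerFromNotes(query, notes, embedLocally, openaiKey, k = 3) {
  const qVec = await embedLocally(query); // local embedding model: vault text stays on-device
  const context = notes
    .map(n => ({ text: n.text, score: cosine(qVec, n.vec) })) // n.vec precomputed locally
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(n => n.text)
    .join("\n---\n");
  // Only the top-k excerpts plus the question are sent to OpenAI.
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${openaiKey}` },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [
        { role: "system", content: `Answer using only these notes:\n${context}` },
        { role: "user", content: query },
      ],
    }),
  });
  return (await res.json()).choices[0].message.content;
}
```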

🌴

joelmnz commented 11 months ago

In addition to local LLM support, would you consider an LLM router such as https://withmartian.com/, which boasts faster speeds and reduced costs?

I haven't tried this service yet, but if it would be considered, I'd be happy to investigate further.

barshag commented 9 months ago

Any updates on how to connect Ollama?

brianpetro commented 9 months ago

@barshag

V2.1 will enable configuring API endpoints for the chat model. I can't say how featureful this option will be compared to what's possible with the OpenAI API, especially since I intend to add significant capabilities via function calling in v2.1 and I'm not up to date on where local models stand with function calling. But the configuration should allow integration with local models for anyone who can set a model up locally and access it via localhost.
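As a rough illustration of what such a localhost configuration could target (assuming a recent Ollama build with its OpenAI-compatible API; the model name and port are assumptions, so check your own `ollama serve` output and pulled models):

```js
// Sketch, not the plugin's settings UI: recent Ollama versions expose an
// OpenAI-compatible API on localhost:11434, which a custom endpoint can point at.
async function testOllamaChat() {
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" }, // no real API key needed for a local server
    body: JSON.stringify({
      model: "llama3",
      messages: [{ role: "user", content: "Reply with OK if you can read this." }],
    }),
  });
  console.log((await res.json()).choices[0].message.content);
}
```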

I hope that helps!

🌴

benabhi commented 8 months ago

I desperately need this feature ^^ I tried editing the OpenAI URLs in main.js and pointing them at my local LLM served by LM Studio, but it didn't work.

wwjCMP commented 8 months ago

LM Studio provides proxy functionality compatible with the OpenAI API.
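A quick way to sanity-check that local server before pointing anything at it (illustrative snippet; port 1234 is LM Studio's usual default, so confirm the actual address in its Server tab):

```js
// List the models LM Studio's OpenAI-compatible local server exposes.
async function listLocalModels() {
  const res = await fetch("http://localhost:1234/v1/models");
  const data = await res.json();
  console.log(data.data.map(m => m.id)); // model IDs the local server is serving
}
```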

brianpetro commented 8 months ago

@wwjCMP yes, it does.

I've already connected it in my development version of Smart Connections.

Configurable endpoints/models is just one of the chat features that will be rolling out with v2.1. Still got a few things I'm working on, but it should be rolling out pretty soon as an early access option for supporters.

🌴

UBy commented 8 months ago

Support for an OpenRouter connection would be huge, as it gives you access to a great number of models using the same API: https://openrouter.ai/docs#models Maybe this is a bit off-topic, but it's related, as this is just another custom-endpoint configuration.

brianpetro commented 8 months ago

@UBy that looks interesting, thanks for the tip.

Korayem commented 8 months ago


was about to post a Feature Request but did a search first and found your comment @UBy

Glad @brianpetro likes it!

leethobbit commented 7 months ago

Thanks for the great plugin. I'd like to add to the requests for local LLM usage: if we're going to be allowed to modify the base_url for the model, can we ensure it will work beyond just localhost? A lot of us are hosting our models on servers or gaming desktops at the moment, and I definitely can't run anything locally on my laptop.

Very excited for this! Sending data to a 3rd party like OpenAI is a showstopper for me and most people I know that are dabbling in the LLM space currently.

brianpetro commented 7 months ago

Hey @leethobbit , happy to hear you like Smart Connections 😊

Custom local chat models are already partially available in the v2.1 early release. I say partially because none of the people helping me beta test v2.1 seem to have tried them. I could get them working in my tests, but the local models I was testing with, the only ones I could run on an 8GB M2 Mac, returned mostly gibberish.

The current implementation allows custom configuration over localhost, but I've already decided to expose all configuration options for the "custom" models. This would allow using any hostname, as long as the endpoint accepts the OpenAI API format. That's probably what you're looking for to access your gaming machine from your laptop. But this hasn't been a priority, since no one participating in the early release has indicated any interest in, or even use of, the local chat models.

Maybe you can help work out the bugs once v2.1 becomes a general release, which should be relatively soon, as I have some other big updates I'm looking forward to implementing in v2.2.

Thanks for participating in the Smart Connections community! 🌴

brianpetro commented 7 months ago

Update: Thanks to the help of an individual who prefers to remain unnamed, I got the Smart Chat working with Ollama. So far, the new settings for the local chat model look like this in the v2.1 early release: [screenshot of the v2.1 local chat model settings] 🌴

Allwaysthismoment commented 7 months ago

Bravo!!

Nice update and loading it up now.



wwjCMP commented 6 months ago


Does this also support running the embedding model through Ollama?

brianpetro commented 6 months ago

@wwjCMP embedding through Ollama is not yet supported. If this is something you're interested in, please make a feature request here https://github.com/brianpetro/obsidian-smart-connections/issues

daaain commented 4 months ago

Took me a few tries to get it working with LM Studio, so I'm sharing the settings below. Apparently a model name is required even if it's not relevant (so I just put some random characters in there).

One thing though: even if I start my question with "Based on my notes..." it doesn't look like any context is being sent to the model. Why could that be? I tried 2 different local embedding models, but got the same result.

[screenshot: Smart Chat custom local model settings for LM Studio]

daaain commented 4 months ago

Oh, I just found out that there are loads of warnings and errors in the console.

First of all, I guess this is a bug? There should be no need for an API key when using a local backend.

No API key found for custom_local. Cannot retrieve models.

Then it also seems to be struggling to retrieve the embeddings and to use a tool.

What is OrtRun()?

Sorry, realised that a lot of this might be off-topic, just trying to debug the issue...

Edit: never mind, redoing the embedding once more apparently fixed it, except it still warns about the API key and keeps trying to connect to Smart Connect, logging "Smart Connect is not running, will try to connect again later" over and over.

[screenshot of the console output]

huachuman commented 4 months ago

This extension used to work really well, but since v2 came out it's been a nightmare. I never managed to get it working properly again.

brianpetro commented 4 months ago

Closing in favor of creating new, more specific issues, since the original request, adding local model support, has been addressed in the latest versions 😊🌴

huachuman commented 4 months ago

Weird, I just said it's not working. Marking all my posts as off-topic is a little suspicious, but hey Brian, you do you.