logancyang / obsidian-copilot


Feature Request: Support of local inference server of LM Studio #147

Closed RobKnop closed 9 months ago

RobKnop commented 1 year ago

LM Studio is more convenient and easier to use than LocalAI.

https://lmstudio.ai

LM Studio also offers a drop-in replacement for the OpenAI API.

Otherwise: Great work so far!

RobKnop commented 1 year ago

Maybe helpful https://github.com/lmstudio-ai/examples

Sokole1 commented 1 year ago

Hi @RobKnop, you can already use LM Studio. Just make sure the CORS setting is enabled inside LM Studio when you start the API server. Then, inside the Copilot settings, set "OpenAI Proxy Base URL (3rd-party providers)" to the correct endpoint. Afterward, conversations with the OpenAI GPT models will be sent to LM Studio instead.
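
For anyone else wiring this up, here is a minimal sketch of the kind of request that ends up hitting LM Studio once the proxy base URL points at it. It assumes LM Studio's default local server port of 1234 and uses a placeholder model name; adjust both to match your setup.

```typescript
// Minimal sketch (not Copilot's actual code): a non-streaming chat completion
// request against LM Studio's OpenAI-compatible local server.
// Assumptions: LM Studio's default port 1234 and a placeholder model name;
// LM Studio serves whichever model is currently loaded.
async function testChatCompletion(): Promise<void> {
  const response = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // placeholder
      messages: [{ role: "user", content: "Hello from Obsidian Copilot" }],
      stream: false, // non-streaming request
    }),
  });
  const data = await response.json();
  console.log(data.choices[0].message.content);
}

testChatCompletion().catch(console.error);
```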

Note that QA: Active Note won't work, since, to my understanding, LM Studio does not support the /embeddings endpoint.
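
To make that limitation concrete, this is roughly the kind of OpenAI-style embeddings request QA mode would need the local server to answer (same assumed port and placeholder model as above); a 404 here would confirm that embeddings are not supported.

```typescript
// Sketch of an OpenAI-style embeddings request. QA: Active Note needs this
// kind of call to succeed; an error response from LM Studio confirms the gap.
async function testEmbeddings(): Promise<void> {
  const response = await fetch("http://localhost:1234/v1/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // placeholder
      input: "Text from the active note",
    }),
  });
  console.log(response.status, await response.text());
}

testEmbeddings().catch(console.error);
```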

RobKnop commented 1 year ago

Thanks for answering.

I just tried it, and it does not work. I get a LangChain error, something about streaming vs. non-streaming.

LM Studio says:

Start a local HTTP server on your chosen port.

Request and response formats follow OpenAI's Chat Completion API.

Both streaming and non-streaming usages are supported.

So I don't know which side made the implementation mistake, Copilot or LM Studio.
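
One way to narrow it down would be to hit LM Studio's streaming path directly, bypassing Copilot and LangChain. Below is a rough sketch (same assumed port and placeholder model as above): if the non-streaming request works but this one produces malformed or missing `data:` chunks, the problem is more likely on LM Studio's side; if this works cleanly, the mismatch is probably in how the LangChain client parses the stream.

```typescript
// Sketch: request a streamed chat completion from LM Studio and dump the raw
// server-sent-event chunks. The OpenAI API streams "data: {...}" lines ending
// with "data: [DONE]"; anything else may trip up the client-side parser.
async function testStreaming(): Promise<void> {
  const response = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // placeholder
      messages: [{ role: "user", content: "Say hello" }],
      stream: true, // ask for a streamed (SSE) response
    }),
  });
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    console.log(decoder.decode(value)); // raw chunks, e.g. "data: {...}\n\n"
  }
}

testStreaming().catch(console.error);
```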