Closed: RobKnop closed this issue 9 months ago.
Maybe helpful: https://github.com/lmstudio-ai/examples
Hi @RobKnop, you can already use LM Studio. Just make sure the CORS setting is enabled in LM Studio when you start the API server. Then, in the Copilot settings, set "OpenAI Proxy Base URL (3rd-party providers)" to the correct endpoint. Afterward, conversations with the OpenAI GPT models will be sent to LM Studio instead.
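For reference, here is a minimal sketch of the kind of request that ends up redirected once the base URL points at LM Studio. This is not Copilot's actual code; the address http://localhost:1234/v1 is LM Studio's default, and the model name and API key are placeholders, so adjust them to your setup.

```ts
// Minimal sketch: an OpenAI-style chat completion sent to LM Studio's
// local server. Assumptions: the server runs at the default address
// http://localhost:1234/v1 and a model is already loaded; LM Studio
// answers with whatever model is loaded regardless of the name sent.
const BASE_URL = "http://localhost:1234/v1";

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer lm-studio", // placeholder; no real key needed
    },
    body: JSON.stringify({
      model: "local-model", // placeholder model name
      messages: [{ role: "user", content: prompt }],
      stream: false,
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}: ${await res.text()}`);
  const data: any = await res.json();
  return data.choices[0].message.content;
}

chat("Say hello in one sentence.").then(console.log).catch(console.error);
```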
Note that QA: Active Note won't work, since, to my understanding, LM Studio does not support the /embeddings endpoint.
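If you want to check that against your LM Studio version, a one-off probe like this (same assumed default address as above) will show whether the endpoint exists; an error status would confirm it is missing.

```ts
// Probe the OpenAI-style /embeddings endpoint on the assumed default
// address. If LM Studio does not implement it, this prints an error
// status (e.g. 404) instead of an embedding vector.
(async () => {
  const res = await fetch("http://localhost:1234/v1/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "local-model", input: "hello" }),
  });
  console.log(res.status, await res.text());
})().catch(console.error);
```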
Thanks for answering.
I just tried it, but it does not work. I get a LangChain error, something about streaming vs. non-streaming.
LM Studio says:
> Start a local HTTP server on your chosen port.
> Request and response formats follow OpenAI's Chat Completion API.
> Both streaming and non-streaming usages are supported.
So I don't know which side made an implementation mistake, Copilot or LM Studio.
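For what it's worth, a raw check along these lines could narrow it down. This is only a sketch: it assumes LM Studio's default address http://localhost:1234/v1 and uses a placeholder model name. If the streamed frames print as well-formed `data: {...}` events, the mistake is more likely on the client side.

```ts
// Diagnostic sketch: send the same chat completion twice, once
// non-streaming and once streaming, and dump the raw responses to see
// which side mishandles streaming. Assumptions: LM Studio's default
// address http://localhost:1234/v1; "local-model" is a placeholder.
const ENDPOINT = "http://localhost:1234/v1/chat/completions";

async function request(stream: boolean): Promise<void> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model",
      messages: [{ role: "user", content: "Reply with one word." }],
      stream,
    }),
  });
  console.log(`stream=${stream} -> HTTP ${res.status}`);
  if (!stream || !res.body) {
    console.log(await res.text());
    return;
  }
  // Streaming responses arrive as server-sent events: lines of the form
  // "data: {...json chunk...}" terminated by "data: [DONE]".
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    console.log(JSON.stringify(decoder.decode(value, { stream: true })));
  }
}

(async () => {
  await request(false);
  await request(true);
})().catch(console.error);
```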
LM Studio is more convenient and easier to use than LocalAI.
https://lmstudio.ai
LM Studio also provides a drop-in replacement for the OpenAI API.
Otherwise: Great work so far!