FlowiseAI / Flowise

Drag & drop UI to build your customized LLM flow
https://flowiseai.com
Apache License 2.0

[FEATURE] Using LM studio with local LLM models as an endpoint server and OpenAIChatModel #1597

Open Mayorc1978 opened 5 months ago

Mayorc1978 commented 5 months ago

Describe the feature you'd like
I would like a field to specify the base_url in OpenAIChatModel, so I can use the LM Studio feature that turns local LLM models into a server endpoint with an OpenAI-compatible API.

Additional context
Given that most AI tools already support the OpenAI API, being able to point them at a local server endpoint would let their API usage converge. Desktop computers have limited RAM/GPU power, so serving multiple tools (Flowise, VS Code assistants) from one loaded model through a standardized, selectable endpoint, instead of being forced to load multiple models into memory, would be important.
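For reference, a minimal sketch of what "OpenAI-compatible local endpoint" means in practice, assuming LM Studio's server is running on its default port 1234 and using the official openai Node SDK; the model name and API key below are placeholders, since LM Studio ignores the key and serves whichever model is loaded:

```ts
// Sketch: point the openai Node SDK at LM Studio's local OpenAI-compatible server.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:1234/v1", // LM Studio's local endpoint (assumed default port)
  apiKey: "lm-studio",                 // dummy key; LM Studio does not check it, but the SDK requires one
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "local-model",              // placeholder; LM Studio uses the currently loaded model
    messages: [{ role: "user", content: "Hello from a local endpoint" }],
  });
  console.log(completion.choices[0].message.content);
}

main();
```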

HenryHengZJ commented 5 months ago

Would you be able to achieve this using CustomChatOpenAI?

There you can specify the model, base URL and options: [image]
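As a rough guide to what those node fields correspond to, here is a hedged sketch in LangChain JS (assumption: Flowise's ChatOpenAI-style nodes wrap @langchain/openai's ChatOpenAI, and the exact field names can vary between langchain versions):

```ts
// Sketch: LangChain JS ChatOpenAI pointed at a local OpenAI-compatible server.
import { ChatOpenAI } from "@langchain/openai";

const chat = new ChatOpenAI({
  modelName: "local-model",              // model name; placeholder for whatever LM Studio has loaded
  openAIApiKey: "lm-studio",             // dummy key; the local server does not validate it
  configuration: {
    baseURL: "http://localhost:1234/v1", // the BasePath / base URL field, including the /v1 suffix
  },
});

const res = await chat.invoke("Say hello");
console.log(res.content);
```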

Mayorc1978 commented 5 months ago

Most of the tools that let you specify the base URL worked fine for me, but a few are still giving me problems (Flowise included). I tested filling BasePath with both http://localhost:1234 and http://localhost:1234/v1 and nothing happened; LM Studio doesn't show any activity. I even tried setting an environment variable with the OpenAI base URL, and that didn't work either. So an example of how to fill those fields properly would be a great help.
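One way to narrow this down is to hit the LM Studio endpoint directly, outside of Flowise, and check whether its server log shows any activity. A small debugging sketch, assuming the server is on port 1234 and using Node's built-in fetch (Node 18+); the endpoints below are the standard OpenAI-compatible ones:

```ts
// Sketch: verify the LM Studio server is reachable before tweaking the Flowise node.
const base = "http://localhost:1234/v1";

// 1. List the models the server exposes; if this fails, the problem is not in Flowise.
const models = await fetch(`${base}/models`).then((r) => r.json());
console.log(models);

// 2. Send one chat completion; LM Studio should show activity in its server log.
const reply = await fetch(`${base}/chat/completions`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "local-model", // placeholder name
    messages: [{ role: "user", content: "ping" }],
  }),
}).then((r) => r.json());
console.log(reply.choices?.[0]?.message?.content);
```

If both calls work here but the Flowise node still produces no traffic, the base URL value entered in the node (with or without /v1) is the likely culprit.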