Open Mayorc1978 opened 5 months ago
Would you be able to achieve this using CustomChatOpenAI, where you can specify the model, base URL, and options?
Most of the tools that allow you to specify the base URL worked great for me, but a few are still giving me problems (Flowise included). So if you could fill in those fields, it would be useful, because I tested filling BasePath with both http://localhost:1234
and http://localhost:1234/v1
and nothing happened: LM Studio doesn't show any activity. I even tried setting an environment variable with the OpenAI base URL, and that didn't work either.
So an example of how to fill in those fields properly would be a great help.
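For reference, the environment-variable route depends on which SDK version the tool embeds, which may explain why one variable name silently does nothing. A minimal sketch, assuming the official openai SDK naming conventions (OPENAI_API_BASE for older clients, OPENAI_BASE_URL for the current one):

```python
import os

# Assumed variable names: older openai SDKs (<1.0) and many wrappers read
# OPENAI_API_BASE, while the current openai SDK reads OPENAI_BASE_URL.
# Setting both covers either case.
os.environ["OPENAI_API_BASE"] = "http://localhost:1234/v1"
os.environ["OPENAI_BASE_URL"] = "http://localhost:1234/v1"
# Most clients still insist on an API key, even though a local server
# like LM Studio ignores its value.
os.environ["OPENAI_API_KEY"] = "lm-studio"
```

If a tool ignores both variables, it likely only honors an explicit base-URL field in its own configuration, which is what this issue asks for.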
Describe the feature you'd like
I would like a field to specify the base_url in OpenAIChatModel, so I can use the LM Studio feature that turns local LLM models into a server endpoint with an OpenAI-compatible API.

Additional context
Given that most AI tools have generic support for the OpenAI API, it would be useful to point them at a local server endpoint so that API usage converges. This matters especially because desktop computers have limited RAM/GPU power: being able to serve multiple tools (Flowise, VS Code assistants) through one selectable, standardized endpoint, without being forced to load multiple models in memory, would be important.
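The requested behavior can be illustrated without any SDK. The sketch below builds (but does not send) an OpenAI-compatible chat-completion request against a local base URL; the model name "local-model" and the "lm-studio" key placeholder are assumptions for illustration. Note the /v1 suffix: clients append /chat/completions to whatever base URL they are given, so http://localhost:1234 alone produces the wrong path.

```python
import json
from urllib import request

# Assumed local endpoint exposed by LM Studio's server mode.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(base_url, model, messages):
    """Build an OpenAI-compatible chat completion request (not sent here)."""
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return request.Request(
        url=f"{base_url}/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            # A local server typically ignores the key, but OpenAI-style
            # clients require the header to be present.
            "Authorization": "Bearer lm-studio",
        },
        method="POST",
    )

req = build_chat_request(
    BASE_URL, "local-model", [{"role": "user", "content": "Hello"}]
)
# Actually sending it requires LM Studio's server to be running locally:
# with request.urlopen(req) as resp:
#     print(json.load(resp))
```

A base_url field on the chat model node would let Flowise construct exactly this kind of request against the local endpoint instead of api.openai.com.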