Status: Closed (jakobdylanc closed this 8 months ago)
hey @jakobdylanc, can you explain the difference between this and how it works today?
```yaml
model_list:
  - model_name: my-model
    litellm_params:
      model: openai/<model-name>
      api_base: os.environ/MY-API-BASE
      api_key: os.environ/MY-API-KEY
```
The Feature
Currently, users have to manually override "base_url" in their code when using a local API server. LiteLLM should support a "local/" prefix at the beginning of the model name, which would pull in the LOCAL_API_URL and LOCAL_API_KEY environment variables.
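For context, today's workaround might look roughly like this (a sketch only; the model name and URL are illustrative, not defaults):

```python
import os

# Today's workaround (sketch): manually pass api_base/api_key to
# litellm.completion for a locally hosted OpenAI-compatible server.
# The URL and model name here are illustrative placeholders.
completion_kwargs = {
    "model": "openai/my-local-model",
    "api_base": "http://localhost:5000/v1",  # manually overridden base URL
    "api_key": os.environ.get("LOCAL_API_KEY", "Not used"),
    "messages": [{"role": "user", "content": "Hello"}],
}
# response = litellm.completion(**completion_kwargs)
```

Every caller has to repeat this override, which is the friction this feature request aims to remove.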
For example, users of oobabooga/text-generation-webui could set the model name to "local/openai/model", indicating a locally running OpenAI-compatible API server. Then they'd just have to set the following environment variables:

- LOCAL_API_URL
- LOCAL_API_KEY

(LOCAL_API_KEY should be optional, since local API servers usually don't need an API key. But it definitely has use cases.)
Edge case note: Some local API servers (like LM Studio) will throw an error if "api_key" is a blank string. So LiteLLM should account for this, e.g. by setting "api_key" to "Not used" if the user doesn't provide a valid LOCAL_API_KEY.
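The proposed resolution logic, including the LM Studio edge case, could be sketched like this (a hypothetical helper, not LiteLLM's actual implementation; the "Not used" fallback value is an assumption from the note above):

```python
import os

def resolve_local_model(model: str):
    """Hypothetical resolution of a 'local/' model prefix (sketch only).

    Returns (resolved_model, api_base, api_key); api_base/api_key are
    None when the model has no 'local/' prefix.
    """
    if not model.startswith("local/"):
        return model, None, None
    # Strip the prefix: "local/openai/model" -> "openai/model"
    resolved = model[len("local/"):]
    # The base URL is required for a local server.
    api_base = os.environ["LOCAL_API_URL"]
    # Some local servers (e.g. LM Studio) reject a blank api_key, so
    # fall back to a dummy value when LOCAL_API_KEY is unset or empty.
    api_key = os.environ.get("LOCAL_API_KEY") or "Not used"
    return resolved, api_base, api_key
```

With LOCAL_API_URL set and LOCAL_API_KEY unset, `resolve_local_model("local/openai/model")` would yield `("openai/model", <LOCAL_API_URL>, "Not used")`, avoiding the blank-key error.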
Motivation, pitch
This is the simplest and most general solution for all local API server use cases.