Closed. merlijn closed this issue 2 months ago.
Hey @merlijn is this inferring the models based on credentials in the environment?
Can we do a 10min call to understand your use-case better? https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
I would like to suggest a similar feature that allows users to set keys in a configuration file and load these keys using the function litellm.load_keys(fpath). This feature offers several advantages:
Configuration File Example:

- provider: "openai"
  models: []        # List of model names; if specified, only these models will be accessible
  api_keys: []      # List of API keys, for redundancy and different budgets
  api_version: ...  # API version, if applicable
  api_base: ...     # API base URL, if applicable
  location: []      # List of locations, if applicable
  project: ...      # Project identifier, if applicable
Justification:

models: This field allows users to specify a list of models. If models are specified, the API will be restricted to calling only these models; if not specified, the API can call any model available from the provider. This provides greater control over the resources being used.

api_keys: Users may have multiple API keys, either due to different budgets or to ensure availability in case one service is down. By allowing a list of keys, users can switch between them automatically as needed.

Additional fields: The api_version, api_base, location, and project fields provide additional configuration options to tailor the setup to specific requirements, ensuring flexibility and comprehensive control over API usage.
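A minimal sketch of what the proposed litellm.load_keys(fpath) could do, assuming the YAML layout above. load_keys does not exist in LiteLLM today, and the provider-to-environment-variable mapping below is illustrative rather than exhaustive:

```python
import os

import yaml  # pip install pyyaml

# Illustrative mapping from provider name to the environment variable
# LiteLLM reads that provider's key from; only two providers shown.
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
}


def load_keys(fpath: str) -> dict:
    """Read the proposed config file, export keys to the environment,
    and return a provider -> allowed-models mapping."""
    with open(fpath) as f:
        entries = yaml.safe_load(f)

    allowed_models = {}
    for entry in entries:
        provider = entry["provider"]
        keys = entry.get("api_keys") or []
        if keys and provider in PROVIDER_ENV_VARS:
            # A real implementation could rotate through the list when a
            # key fails or exhausts its budget; here we just take the first.
            os.environ[PROVIDER_ENV_VARS[provider]] = keys[0]
        if entry.get("api_base"):
            # Env var naming here is illustrative, not LiteLLM's convention.
            os.environ[f"{provider.upper()}_API_BASE"] = entry["api_base"]
        allowed_models[provider] = entry.get("models") or []
    return allowed_models
```

A caller could then filter requests against allowed_models[provider] before passing a model name to litellm.completion().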
hey @lowspace how would you handle something like azure - where the api_base + api_key + model combination is unique (not all api bases have access to all models, and not all keys work for all api_bases)?
if so, use a unique id generated by the user as the primary key instead of the provider
can you give me an example of what that might look like? @lowspace
also can we move to a separate issue - will be easier to track
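For illustration only (the thread did not settle on a format), one possible shape that uses a user-generated id as the primary key, so each entry pins its own api_base, api_key and model the way Azure deployments require. Every name and value below is made up:

```python
# Each user-chosen id maps to exactly one api_base + api_key + model
# combination, which sidesteps the "not every key works for every
# api_base" problem on Azure. Purely hypothetical structure.
deployments = {
    "azure-gpt4-eu": {
        "provider": "azure",
        "api_base": "https://my-eu-resource.openai.azure.com",
        "api_key": "<key-for-eu-resource>",
        "api_version": "2024-02-01",
        "model": "gpt-4",
    },
    "azure-gpt35-us": {
        "provider": "azure",
        "api_base": "https://my-us-resource.openai.azure.com",
        "api_key": "<key-for-us-resource>",
        "api_version": "2024-02-01",
        "model": "gpt-35-turbo",
    },
}
```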
This is just a suggestion. I didn't think it through because I don't know the ins and outs of LiteLLM that well. I am playing around with a simple frontend app that allows users to switch between different vendors and models to gain some experience in coding with LLMs.
LiteLLM is ideal for this, since I can write code against a single API rather than all the different flavours of the different vendors. At least that is the idea.
In any case, it is just a convenience, quality-of-life feature. If need be, I can just maintain a list of models in the app. No problem.
hey @merlijn got it - then i think what we can do is return the models available based on keys in the environment
what frontend app is this? Will help for e2e testing
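A sketch of that idea, not LiteLLM's actual implementation: check which provider keys are set in the environment and only advertise those providers' models. The environment variable names are the standard ones; the hard-coded model lists are placeholders for whatever catalogue LiteLLM keeps internally:

```python
import os

# Placeholder model lists; a real implementation would read these from
# LiteLLM's own model catalogue instead of hard-coding them.
MODELS_BY_ENV_VAR = {
    "OPENAI_API_KEY": ["gpt-4o", "gpt-4o-mini"],
    "ANTHROPIC_API_KEY": ["claude-3-5-sonnet-20240620"],
    "COHERE_API_KEY": ["command-r-plus"],
}


def models_from_environment() -> list[str]:
    """Return only the models whose provider key is present in the environment."""
    available: list[str] = []
    for env_var, models in MODELS_BY_ENV_VAR.items():
        if os.environ.get(env_var):
            available.extend(models)
    return available
```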
It is a Telegram bot actually. Like I said, this is a hobby project for me to learn coding with LLMs, not some major project that is actually in production, so don't prioritise this just for me. But maybe if there is more demand for it then great. Thanks for making this project, FYI :)
I can add a use case to this issue. I want to use OpenWebUI and have it use litellm instead of openai; it's annoying to have to add every model for every provider to the config file manually. OpenWebUI must query the available models to show to users, but litellm does not return any models when I use a wildcard. What is worse is that "anthropic/*" shows up as a model to select in the UI.
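For context, this is roughly how an OpenAI-compatible frontend such as OpenWebUI discovers models, and why a literal wildcard entry ends up in the model picker. The base_url and api_key below are placeholders for a local LiteLLM proxy:

```python
from openai import OpenAI  # openai>=1.0

# Placeholder values: point the client at the LiteLLM proxy, not api.openai.com.
client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-anything")

# The UI fills its model picker from this list, so whatever /v1/models
# returns (including an entry like "anthropic/*") is what users see.
for model in client.models.list():
    print(model.id)
```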
The Feature
Hi there. I am using this config:

It works in that I can call /chat/completions, for example. However, if I query /models or /v1/models I just get back a single model. It would be great if it would list the models available from openai so that I don't have to maintain that list in the UI app.
Motivation, pitch
It would be very convenient to auto-populate the available models in a UI. You could proxy the call to /models on the OpenAI (or other vendor) API and return the response.
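A rough sketch of what proxying the upstream endpoint could look like for OpenAI. https://api.openai.com/v1/models is OpenAI's real model-listing endpoint; the function itself, and any merging with other providers, is illustrative and not how LiteLLM actually implements /models:

```python
import os

import httpx


def list_upstream_openai_models() -> list[dict]:
    """Fetch the models the configured OpenAI key can access, in the same
    shape an OpenAI-compatible /v1/models response uses."""
    resp = httpx.get(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        timeout=10.0,
    )
    resp.raise_for_status()
    # OpenAI responds with {"object": "list", "data": [{"id": "...", ...}]}.
    return resp.json()["data"]
```

A proxy could call something like this once per configured provider and merge the results into its /v1/models response instead of echoing wildcard entries.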
Twitter / LinkedIn details
No response