BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Feature]: Allow showing aliases in `/v1/models` #2837

Closed: Manouchehri closed this issue 3 weeks ago

Manouchehri commented 6 months ago

The Feature

For this config:

litellm_settings:
  fallbacks: [{"gemini-1.0-pro": ["gemini-1.0-pro-vertex"]}] # fall back to Vertex if the call fails after num_retries

router_settings:
  model_group_alias: {"gemini-pro": "gemini-1.0-pro"}

model_list:
  - model_name: gemini-1.0-pro-vertex
    litellm_params:
      model: vertex_ai/gemini-1.0-pro
      vertex_project: epic-project
      vertex_location: us-east4
      hide_from_model_list: True

  - model_name: gemini-1.0-pro
    litellm_params:
      model: gemini/gemini-1.0-pro
      hide_from_model_list: True

I would want /v1/models to return this:

{
  "data": [
    {
      "id": "gemini-pro",
      "object": "model",
      "created": 1677610602,
      "owned_by": "openai"
    }
  ],
  "object": "list"
}
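For reference, one way to check what the proxy actually exposes is to list models through the OpenAI Python SDK pointed at the proxy. This is just a sketch: the base URL and API key below are placeholders, not values from this config.

import openai

# Point the OpenAI client at the litellm proxy.
# base_url and api_key are placeholders; substitute your proxy's address and key.
client = openai.OpenAI(
    api_key="sk-1234",
    base_url="http://localhost:4000",
)

# GET /v1/models via the SDK. With the behavior requested above, only the
# "gemini-pro" alias would be listed, since both underlying deployments set
# hide_from_model_list: True.
for model in client.models.list():
    print(model.id)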

Motivation, pitch

Same motivation as #2835: it makes it clearer to our developers which model name they should use in their projects.

Twitter / LinkedIn details

https://twitter.com/DaveManouchehri

MchLrnX commented 6 months ago

This would be really good to have.

krrishdholakia commented 3 weeks ago

This is now supported, @Manouchehri.
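For anyone finding this later: the litellm router docs now describe model_group_alias entries being shown in /v1/models, with an optional flag to keep an alias usable but hidden. The snippet below is a sketch of that config shape as I understand it, not a verbatim excerpt; double-check the exact syntax against https://docs.litellm.ai/docs/.

router_settings:
  model_group_alias:
    # alias shown in /v1/models and accepted in requests
    "gemini-pro": "gemini-1.0-pro"
    # assumed dict form for an alias that works but is hidden from /v1/models
    "gemini-pro-hidden":
      model: "gemini-1.0-pro"
      hidden: true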