BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/
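
For context, litellm exposes every provider behind the OpenAI chat-completions call signature, so switching providers is a matter of changing the model string. A minimal sketch (model names and keys are illustrative):

```python
# Minimal sketch: one call signature across providers (model names are illustrative).
import os
from litellm import completion

os.environ["OPENAI_API_KEY"] = "sk-..."  # set the key for whichever provider you call

messages = [{"role": "user", "content": "Hello, how are you?"}]

# Same OpenAI-format call, different providers - just swap the model string.
response = completion(model="gpt-3.5-turbo", messages=messages)
# response = completion(model="anthropic/claude-2", messages=messages)
# response = completion(model="bedrock/anthropic.claude-v2", messages=messages)

print(response.choices[0].message.content)
```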

[Feature]: [29/01/2024 - 05/02/2024] New Models/Endpoints/Providers/Improvements #1665

ishaan-jaff closed this issue 8 months ago

ishaan-jaff commented 8 months ago

UI v2.0 Improvements

Sharing Proxy API Keys
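
If the key-sharing work builds on the proxy's key-generation endpoint, the flow is roughly the following. A sketch assuming a running litellm proxy with its `/key/generate` route and a configured master key; the payload fields and values shown are illustrative:

```python
# Sketch: creating a proxy API key that can be handed to a teammate.
# Assumes a running litellm proxy and its /key/generate endpoint; the
# payload fields here are illustrative, not a complete reference.
import requests

PROXY_URL = "http://localhost:4000"   # where the litellm proxy is running
MASTER_KEY = "sk-1234"                # proxy master key (example value)

resp = requests.post(
    f"{PROXY_URL}/key/generate",
    headers={"Authorization": f"Bearer {MASTER_KEY}"},
    json={"models": ["gpt-3.5-turbo"], "duration": "30d"},
)
resp.raise_for_status()
print(resp.json()["key"])             # the generated key to share
```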

Misc

krrishdholakia commented 8 months ago

Known Backlog:

  1. OpenMeter spend tracking support - https://github.com/BerriAI/litellm/issues/1268
  2. tracking - https://github.com/BerriAI/litellm/issues/997
  3. tracking - https://github.com/BerriAI/litellm/issues/1188 cc: @r1cc4rd0m4zz4
  4. tracking - https://github.com/BerriAI/litellm/issues/1192
  5. YandexGPT support, requested by 2 users - https://github.com/BerriAI/litellm/issues/1254
  6. Aphrodite engine endpoint - https://github.com/PygmalionAI/aphrodite-engine, https://github.com/BerriAI/litellm/pull/1153
  7. TheBloke/NexusRaven-V2-13B-GGUF function calling support https://github.com/BerriAI/litellm/issues/1060 cc: @homanp
  8. Predibase LoRAX support - [Feature]: Add support for Predibase Lorax apis #1253
  9. DeepSparse CPU inference support - https://github.com/neuralmagic/deepsparse/issues/1452
  10. Consistent format for model prices and context window - https://github.com/BerriAI/litellm/issues/1375
  11. Faster Bedrock embedding calls (see the sketch after this list) - https://github.com/BerriAI/litellm/issues/1798
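
On item 11, one plausible direction for speeding up Bedrock embedding calls is to fan the inputs out concurrently instead of looping over them serially. A rough sketch, assuming litellm's async `aembedding` helper and an illustrative Bedrock Titan model name; this is only the shape of the problem, not the fix tracked in that issue:

```python
# Sketch: issuing Bedrock embedding calls concurrently instead of one-by-one.
# Assumes litellm's async `aembedding` helper; the model name is illustrative.
import asyncio
from litellm import aembedding

async def embed_all(texts: list[str]) -> list:
    # One request per text, dispatched concurrently rather than serially.
    tasks = [
        aembedding(model="bedrock/amazon.titan-embed-text-v1", input=[t])
        for t in texts
    ]
    return await asyncio.gather(*tasks)

if __name__ == "__main__":
    docs = ["first document", "second document", "third document"]
    results = asyncio.run(embed_all(docs))
    print(len(results), "embedding responses")
```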