BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Feature]: proxy image spend tracking support #1804

Open krrishdholakia opened 9 months ago

krrishdholakia commented 9 months ago

The Feature

Support tracking spend for openai/azure image gen models

Motivation, pitch

Image generation is relatively expensive, and since the proxy already supports this endpoint, it should also cover spend tracking for it.

Twitter / LinkedIn details

No response

krrishdholakia commented 9 months ago
  1. We need to know the image size in the logging object - `size`
  2. We need to know the number of images requested - `n`

krrishdholakia commented 9 months ago
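With those two fields, per-request spend reduces to a per-image price lookup times the image count. A minimal sketch, assuming an illustrative price table (real prices vary by provider, model, and quality tier, so these numbers are placeholders, not litellm's actual pricing data):

```python
# Illustrative per-image prices keyed by size. These are placeholder
# values; actual prices differ by model and quality tier.
IMAGE_PRICES_USD = {
    "256x256": 0.016,
    "512x512": 0.018,
    "1024x1024": 0.020,
}


def image_gen_cost(size: str, n: int) -> float:
    """Spend for one image-generation request: n images at the per-size price."""
    try:
        price = IMAGE_PRICES_USD[size]
    except KeyError:
        raise ValueError(f"no price registered for size {size!r}")
    return round(price * n, 6)
```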

This should be accessible via self.optional_params
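A sketch of pulling those fields out of the optional params on the logging object. The defaults mirroring the OpenAI image API (`1024x1024`, one image) are assumptions for illustration, not confirmed litellm behavior:

```python
def extract_image_gen_params(optional_params: dict) -> tuple:
    """Pull size/n out of the optional params recorded on the logging object.

    Defaults here are assumed to mirror the OpenAI image API defaults.
    """
    size = optional_params.get("size") or "1024x1024"
    n = optional_params.get("n") or 1  # n may be present but set to None
    return size, n
```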

krrishdholakia commented 9 months ago

Seems like image gen responses don't get logged to spend logs because they lack a `chatcmpl-` id
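If the spend-log writer keys off the `chatcmpl-` id, image responses would be silently skipped. A hypothetical sketch of the failure mode and one possible fix (minting a synthetic id for image responses; the `imggen-` prefix and helper names are assumptions, not litellm's actual code):

```python
import uuid


def should_log_spend(response_id: str) -> bool:
    # Hypothetical filter illustrating the bug: only chat-completion-style
    # ids pass, so image-gen responses (which have no such id) are dropped.
    return response_id.startswith("chatcmpl-")


def ensure_response_id(response: dict) -> dict:
    # Possible fix: give image-gen responses a synthetic id up front so
    # they make it into the spend logs.
    if not response.get("id"):
        response["id"] = f"imggen-{uuid.uuid4()}"
    return response
```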

krrishdholakia commented 9 months ago

considering putting the litellm call id and call_type in the `_hidden_params`, so we can access them for logging
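A sketch of that idea: stash the call id and call type out-of-band in a `_hidden_params` dict on the response, and have the spend logger read from there instead of relying on a `chatcmpl-` id. All field and function names besides `_hidden_params` are illustrative assumptions:

```python
import uuid


def attach_hidden_params(response: dict, call_type: str) -> dict:
    """Record the litellm call id and call type out-of-band on the response."""
    response["_hidden_params"] = {
        "litellm_call_id": str(uuid.uuid4()),
        "call_type": call_type,
    }
    return response


def spend_log_entry(response: dict) -> dict:
    """Build a spend-log row from the hidden params rather than the response id."""
    hidden = response.get("_hidden_params", {})
    return {
        "request_id": hidden.get("litellm_call_id"),
        "call_type": hidden.get("call_type", "unknown"),
    }
```

This keeps the tracking metadata off the user-visible response shape while still giving the logger a stable key per request.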