BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

fix(utils.py): return 'response_cost' in completion call #4436

Closed · krrishdholakia closed this 3 months ago

krrishdholakia commented 3 months ago

Title

return 'response_cost' in completion call

PROXY

[Screenshot 2024-06-26: proxy response showing response_cost]

SDK

import litellm

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    mock_response="Hello world",
)

print(response._hidden_params["response_cost"])
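For context, a per-call response cost like this is typically derived from the prompt/completion token counts and per-token prices for the model. The sketch below illustrates that calculation in plain Python; the `PRICES_PER_1K` table and the `estimate_response_cost` helper are hypothetical illustrations, not litellm's actual pricing data or API:

```python
# Hypothetical sketch: deriving a per-call cost from token usage.
# The price figures below are made-up illustrative values, NOT
# litellm's real pricing table.

PRICES_PER_1K = {
    # model: (input price per 1K tokens, output price per 1K tokens)
    "gpt-3.5-turbo": (0.0005, 0.0015),  # illustrative only
}

def estimate_response_cost(
    model: str, prompt_tokens: int, completion_tokens: int
) -> float:
    """Return an estimated USD cost for one completion call."""
    input_price, output_price = PRICES_PER_1K[model]
    return (
        (prompt_tokens / 1000) * input_price
        + (completion_tokens / 1000) * output_price
    )

# Example: 12 prompt tokens, 9 completion tokens.
cost = estimate_response_cost("gpt-3.5-turbo", prompt_tokens=12, completion_tokens=9)
print(f"{cost:.8f}")  # prints 0.00001950
```

In the real SDK, this figure is precomputed server-side and exposed to callers via `response._hidden_params["response_cost"]`, so no manual lookup is needed.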

Relevant issues

Closes https://github.com/BerriAI/litellm/issues/4335

Type

🆕 New Feature

Changes

[REQUIRED] Testing - Attach a screenshot of any new tests passing locally

If UI changes, send a screenshot/GIF of working UI fixes

cc: @paul-gauthier @olad32 @superpoussin22 @emerzon

vercel[bot] commented 3 months ago

The latest updates on your projects:

Name: litellm · Status: ✅ Ready · Updated (UTC): Jun 28, 2024 4:34am