BerriAI / litellm

Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
https://docs.litellm.ai/docs/

[Bug]: Post-API Call occurs before Pre-API Call in CustomLogger #4236

Open RyoYang opened 1 week ago

RyoYang commented 1 week ago

What happened?

With the callbacks in https://github.com/BerriAI/litellm/blob/3a35a58859a145a4a568548316a1930340e7440a/litellm/proxy/custom_callbacks.py, the Post-API Call hook always fires before the Pre-API Call hook in CustomLogger. Is that expected?

Router Redis Caching=None
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:4000 (Press CTRL+C to quit)
Post-API Call
Pre-API Call
INFO:     127.0.0.1:44244 - "POST /openai/deployments/azure/gpt-35-turbo/chat/completions?api-version=2023-07-01-preview HTTP/1.1" 200 OK
ishaan async_log_success_event
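
For context, a CustomLogger subclass implements pre- and post-call hooks that the proxy is expected to invoke around each request. The sketch below is a pure-Python stand-in (it does not import litellm, and the `RecordingLogger` harness is hypothetical, added only to make the call order visible); the hook names match the ones printed in the log above:

```python
# Stand-in for litellm's CustomLogger
# (litellm.integrations.custom_logger.CustomLogger).
# RecordingLogger is a hypothetical harness that records the
# order in which its hooks fire.

class RecordingLogger:
    """Callback stand-in that records the order its hooks are invoked in."""

    def __init__(self):
        self.calls = []

    def log_pre_api_call(self, model, messages, kwargs):
        # Should fire BEFORE the upstream LLM request is sent.
        self.calls.append("Pre-API Call")

    def log_post_api_call(self, kwargs, response_obj, start_time, end_time):
        # Should fire AFTER the upstream LLM request completes.
        self.calls.append("Post-API Call")


logger = RecordingLogger()

# The expected ordering: pre-call hook first, then post-call hook.
logger.log_pre_api_call("azure/gpt-35-turbo", [], {})
logger.log_post_api_call({}, None, None, None)

assert logger.calls == ["Pre-API Call", "Post-API Call"]
# The log output in this issue shows the proxy printing them in the
# reverse order, which is what the report is about.
```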

krrishdholakia commented 1 week ago

No, that's weird. Thanks for raising this.

krrishdholakia commented 1 week ago

@RyoYang can we set up a support channel over LinkedIn/Discord?

Want to make sure we can debug this + solve any future issues quickly

Discord: link, just wave (👋) on #general, and I'll set it up.

LinkedIn: link