BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Bug]: weird characters in litellm proxy console logs #2196

Closed: tfriedel closed this issue 6 months ago

tfriedel commented 8 months ago

What happened?

I'm using LiteLLM ( ghcr.io/berriai/litellm:main-v1.26.3 ) as a caching proxy for ChatGPT as part of a docker-compose setup (a sketch of a comparable setup is included below). I run about 100 chat completions, and sometimes, after a couple of dozen completions, the entire console output becomes garbled:

```
litellm-1   | INSIDE parallel request limiter ASYNC SUCCESS LOGGING
litellm-1   | Async success callbacks: <litellm.proxy.hooks.max_budget_limiter._PROXY_MaxBudgetLimiter object at 0x7f0e6ae35430>
litellm-1   | Async success callbacks: <litellm.proxy.hooks.cache_control_check._PROXY_CacheControlCheck object at 0x7f0e6ae35460>
litellm-1   | INFO:     172.22.0.1:42156 - "POST /chat/completions HTTP/1.1" 200 OK
litellm-1   | *p⎻U*⎻Q*⎻⎻UL␋├␊LLM P⎼⎺│≤: I┼⎽␋␍␊ P⎼⎺│≤ L⎺±±␋┼± P⎼␊↑␌▒┌┌ ⎺⎺┐!
┌␋├␊┌┌└↑1   ≠ I┼⎽␋␍␊ M▒│ P▒⎼▒┌┌␊┌ R␊─┤␊⎽├ P⎼␊↑C▒┌┌ H⎺⎺┐
┌␋├␊┌┌└↑1   ≠ I┼⎽␋␍␊ M▒│ B┤␍±␊├ L␋└␋├␊⎼ P⎼␊↑C▒┌┌ H⎺⎺┐
┌␋├␊┌┌└↑1   ≠ ±␊├ ␌▒␌␊: ␌▒␌␊ ┐␊≤: N⎺┼␊_┤⎽␊⎼_▒⎻␋_┐␊≤_┤⎽␊⎼_␋␍; ┌⎺␌▒┌_⎺┼┌≤: F▒┌⎽␊
┌␋├␊┌┌└↑1   ≠ ␋┼_└␊└⎺⎼≤_⎼␊⎽┤┌├: N⎺┼␊
```

When I inspect this part using `docker logs`, it looks like this:

```
Async success callbacks: <litellm.proxy.hooks.parallel_request_limiter._PROXY_MaxParallelRequestsHandler object at 0x7f0e6ae35400>
INSIDE parallel request limiter ASYNC SUCCESS LOGGING
Async success callbacks: <litellm.proxy.hooks.max_budget_limiter._PROXY_MaxBudgetLimiter object at 0x7f0e6ae35430>
Async success callbacks: <litellm.proxy.hooks.cache_control_check._PROXY_CacheControlCheck object at 0x7f0e6ae35460>
INFO:     172.22.0.1:42156 - "POST /chat/completions HTTP/1.1" 200 OK
�pp��̽U��pQ��*pp��̽ULiteLLM Proxy: Inside Proxy Logging Pre-call hook!
Inside Max Parallel Request Pre-Call Hook
Inside Max Budget Limiter Pre-Call Hook
get cache: cache key: None_user_api_key_user_id; local_only: False
in_memory_result: None
get cache: cache result: None
Inside Cache Control Check Pre-Call Hook
```

Actually, the characters above are not rendered properly on GitHub; see the attached screenshot for the actual appearance.

I have no idea why this happens. It's also difficult to reproduce, but it happens regularly.
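
For reference, a minimal docker-compose sketch of this kind of setup. Only the image tag is taken from this report; the service name, port mapping, config path, and command are assumptions based on the LiteLLM proxy docs, not the reporter's actual files:

```yaml
# Hypothetical compose service; only the image tag comes from the report above.
version: "3.8"
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-v1.26.3
    ports:
      - "4000:4000"                              # assumed proxy port
    volumes:
      - ./litellm_config.yaml:/app/config.yaml   # assumed config location
    command: ["--config", "/app/config.yaml"]
```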

Relevant log output

No response

Twitter / LinkedIn details

No response

krrishdholakia commented 8 months ago

That's so weird - do you have a sense of the load we could run against the proxy on our end to trigger this? @tfriedel

tfriedel commented 8 months ago

I'm afraid we neither have the full logs nor would we be allowed to share them. We also noticed that when we run our tests a second time, i.e. when the cache is used instead of OpenAI, this message does not appear. As a workaround we have disabled verbose mode for now. If we find the time to investigate this further, we may be able to narrow it down more.

ishaan-jaff commented 8 months ago

Do you see these logs when `cache` is set to `False` in your config?

I suspect this is because we encode/decode when we use caching.
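
A sketch of what that test could look like in the proxy config, assuming caching is toggled via `litellm_settings.cache` as in the LiteLLM proxy docs of that period; the exact layout of the reporter's config is unknown:

```yaml
# Hypothetical config fragment for the suggested test; not the reporter's actual file.
litellm_settings:
  cache: false        # temporarily disable caching to see if the garbling stops
  set_verbose: false  # the verbose-mode workaround mentioned earlier in the thread
```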

ishaan-jaff commented 7 months ago

Hi @tfriedel, wanted to follow up on this. Can we hop on a call to debug this and get your feedback on LiteLLM? Want to make sure we solve this issue. Sharing a link to my cal for your convenience: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat?month=2024-03

tfriedel commented 7 months ago

Hi, this problem has not appeared recently. I'm not sure if I accidentally changed something; in any case, I currently have no way of reproducing it. I'll let you know if I notice it again.

Manouchehri commented 7 months ago

Hmm, my guess is this is caused by non-ASCII control characters?
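
For what it's worth, the symbols in the garbled output (⎺, ␋, ├, ┌, ...) look like a terminal that has been switched to the DEC special-graphics charset, which can happen when a stray control byte or escape sequence (e.g. Shift Out, 0x0E, or ESC ( 0) is written to stdout. A minimal sketch that reproduces this kind of garbling in an xterm-compatible terminal; this is just a terminal demo under that assumption, not LiteLLM code:

```python
# Sketch: ESC ( 0 switches an xterm-compatible terminal to the DEC
# special-graphics charset, so ordinary log text renders as line-drawing
# symbols; ESC ( B switches back to the normal US-ASCII charset.
import sys

sys.stdout.write("before: LiteLLM Proxy: Inside Proxy Logging Pre-call hook!\n")
sys.stdout.write("\x1b(0")  # stray escape sequence, e.g. from binary data in a log line
sys.stdout.write("after:  LiteLLM Proxy: Inside Proxy Logging Pre-call hook!\n")  # renders garbled
sys.stdout.write("\x1b(B")  # restore the normal character set
sys.stdout.flush()
```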

ishaan-jaff commented 6 months ago

Closing - we have not seen this re-appear. @tfriedel feel free to re-open once you have a way to repro.

ishaan-jaff commented 6 months ago

@tfriedel we now offer a Cloud Hosted LiteLLM Proxy - curious, is this something you would want to use?