Closed. tfriedel closed this issue 6 months ago.
That's so weird - do you have a sense of the load we could run on our end against the proxy to reproduce this? @tfriedel
I'm afraid we neither have the full logs nor would we be allowed to share them. We also noticed that when we run our tests a second time, i.e. when the cache is used instead of OpenAI, this message does not appear. As a workaround we are disabling verbose mode for now; see the config sketch below. If we find time to investigate this further, we'll try to narrow it down more.
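Roughly, the workaround in the proxy config looks like this (a sketch; exact key names should be checked against the LiteLLM docs):

```yaml
# litellm proxy config.yaml - sketch of the verbose-logging workaround
litellm_settings:
  set_verbose: False   # keep verbose logging off so the garbled console output does not appear
```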
Do you see these logs when cache is set to False in your config?
I suspect this is because we encode/decode when we use caching.
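e.g. to test with the cache off, something like this in the proxy config (a sketch, assuming a Redis-backed cache; adjust to your setup):

```yaml
# litellm proxy config.yaml - sketch for testing with and without caching
litellm_settings:
  cache: False          # set to True to re-enable the cache
  # cache_params:       # only needed when cache: True
  #   type: redis
  #   host: os.environ/REDIS_HOST
  #   port: os.environ/REDIS_PORT
```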
Hi @tfriedel, wanted to follow up on this. Can we hop on a call to debug this + get your feedback on litellm? Want to make sure we solve this issue. Sharing a link to my cal for your convenience: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat?month=2024-03
Hi, this problem has not appeared recently. I'm not sure if I accidentally changed something. In any case, I currently have no way of reproducing it. I'll let you know if I notice it again.
Hmm, my guess is this is caused by non-ASCII control characters?
Closing - we have not seen this re-appear. @tfriedel feel free to re-open once you have a way to repro.
@tfriedel we now offer a Cloud Hosted LiteLLM Proxy, curious if this is something you would want to use?
What happened?
I'm using LiteLLM ( ghcr.io/berriai/litellm:main-v1.26.3 ) as a caching proxy for ChatGPT as part of a docker-compose setup. I run about 100 chat completions, and sometimes, after a couple of dozen completions, the whole output of my console becomes garbled:
When I inspect this part of the output using `docker logs`, it looks like this:
Actually, the characters above are not rendered properly on GitHub:
I have no idea why this happens. It's also difficult to reproduce, but it happens regularly.
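For context, the docker-compose setup looks roughly like this (a simplified sketch; service names, ports, and the Redis cache here are illustrative, not my exact files):

```yaml
# docker-compose.yaml - sketch of the caching-proxy setup described above
version: "3.8"
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-v1.26.3
    command: ["--config", "/app/config.yaml"]   # config.yaml holds model_list / litellm_settings
    ports:
      - "4000:4000"                             # default proxy port
    volumes:
      - ./config.yaml:/app/config.yaml
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    depends_on:
      - redis
  redis:
    image: redis:7-alpine                       # backing store for the response cache
```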
Relevant log output
No response
Twitter / LinkedIn details
No response