BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Bug]: random `Give Feedback / Get Help` shows up in logs #5942

Open jamesbraza opened 1 month ago

jamesbraza commented 1 month ago

What happened?

Randomly, with litellm==1.48.2, a LiteLLM error message shows up in my logs:

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

However, this message (source) gives no context. I don't want to set litellm.set_verbose=True, as it's way too verbose for my logs.

Can we have this message at least give some context (failure message, module + line number, etc.)?

Relevant log output

2024-09-27 11:36:58,701 - paperqa.agents.tools - INFO - Status: Paper Count=8 | Relevant Papers=1 | Current Evidence=3 | Current Cost=$0.1187
2024-09-27 11:36:59,221 - paperqa.agents.tools - INFO - gather_evidence starting for question 'Timing of blastema emergence in pak1(RNAi) planarians after amputation'.
2024-09-27 11:37:00,758 - paperqa.agents.tools - INFO - Status: Paper Count=14 | Relevant Papers=2 | Current Evidence=3 | Current Cost=$0.2063

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

2024-09-27 11:37:05,058 - paperqa.agents.tools - INFO - Status: Paper Count=8 | Relevant Papers=1 | Current Evidence=1 | Current Cost=$0.2576
2024-09-27 11:37:06,174 - paperqa.agents.tools - INFO - Generating answer for 'When do blastema become apparent in amputated pak1(RNAi) planarians?'.

jamesbraza commented 1 month ago

Also, when you actually set litellm.set_verbose=True, you get a deprecation warning:

2024-09-27 12:15:47,960 - LiteLLM - WARNING - `litellm.set_verbose` is deprecated. Please set `os.environ['LITELLM_LOG'] = 'DEBUG'` for debug logs.

So another related request: let's update this default message so that what it suggests isn't itself deprecated.
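
For reference, the non-deprecated route the warning points to would be something like:

import os

# Per the deprecation warning above: enable debug logs via the env var
# instead of litellm.set_verbose. Setting it before importing litellm is
# the safe order, so it's picked up when the logger is configured.
os.environ["LITELLM_LOG"] = "DEBUG"

import litellm  # noqa: E402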

jamesbraza commented 1 month ago

Okay, I locally edited the source to have traceback.print_exception(type(original_exception), original_exception, original_exception.__traceback__) here, and this is the error:

Traceback (most recent call last):
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 72, in map_httpcore_exceptions
    yield
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 377, in handle_async_request
    resp = await self._pool.handle_async_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpcore/_async/connection_pool.py", line 216, in handle_async_request
    raise exc from None
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpcore/_async/connection_pool.py", line 196, in handle_async_request
    response = await connection.handle_async_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpcore/_async/connection.py", line 99, in handle_async_request
    raise exc
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpcore/_async/connection.py", line 76, in handle_async_request
    stream = await self._connect(request)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpcore/_async/connection.py", line 154, in _connect
    stream = await stream.start_tls(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpcore/_backends/anyio.py", line 68, in start_tls
    with map_exceptions(exc_map):
  File "/path/to/.pyenv/versions/3.12.5/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/path/to/repo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1554, in _request
    response = await self._client.send(
               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1674, in send
    response = await self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1702, in _send_handling_auth
    response = await self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1739, in _send_handling_redirects
    response = await self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1776, in _send_single_request
    response = await transport.handle_async_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 376, in handle_async_request
    with map_httpcore_exceptions():
  File "/path/to/.pyenv/versions/3.12.5/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/path/to/repo/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 89, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/path/to/repo/.venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 944, in acompletion
    headers, response = await self.make_openai_chat_completion_request(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 639, in make_openai_chat_completion_request
    raise e
  File "/path/to/repo/.venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 627, in make_openai_chat_completion_request
    await openai_aclient.chat.completions.with_raw_response.create(
  File "/path/to/repo/.venv/lib/python3.12/site-packages/openai/_legacy_response.py", line 370, in wrapped
    return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 1412, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1821, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1515, in request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1588, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/path/to/repo/.venv/lib/python3.12/site-packages/litellm/main.py", line 430, in acompletion
    response = await init_response
               ^^^^^^^^^^^^^^^^^^^
  File "/path/to/repo/.venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 995, in acompletion
    raise OpenAIError(
litellm.llms.OpenAI.openai.OpenAIError: Connection error.

It seems to be a flaky error from OpenAI, an openai.APIConnectionError.

@krrishdholakia why is LiteLLM not auto-retrying this, instead of surfacing an unhandled stack trace?
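
For context, here is a minimal sketch of the caller-side retry I'd expect to happen by default, using litellm's num_retries parameter (the model name is just a placeholder):

import asyncio

import litellm

async def main() -> None:
    # num_retries asks litellm to retry transient failures
    # (e.g. openai.APIConnectionError) before raising.
    response = await litellm.acompletion(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": "hello"}],
        num_retries=3,
    )
    print(response.choices[0].message.content)

asyncio.run(main())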

krrishdholakia commented 1 month ago

> Can we have this message at least give some context (failure message, module + line number, etc.)?

The message is raised by `completion`; the retries are handled by the router, which might be helpful context here.

There's nothing to suggest the error isn't being retried - do you see a failed request?

jamesbraza commented 1 month ago

Thanks for responding. What I see in my logs is the message below showing up a lot, which makes me think something is failing.

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

I think LiteLLM should only print this ^ message when something is not being retried. If a retry is taking place, I don't want to see a random failure message showing up in my logs. Does that make sense?

krrishdholakia commented 1 month ago

@jamesbraza it should be simple to disable this:

litellm_settings:
    suppress_debug_info: true

Does this solve your problem?
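
For the Python SDK (rather than the proxy config), the same flag can be set on the module:

import litellm

# SDK equivalent of the proxy's litellm_settings.suppress_debug_info
litellm.suppress_debug_info = True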

jamesbraza commented 1 month ago

So I am aware one can configure that, but in general I am trying to point out that this is bad default behavior.

I think LiteLLM should change its default so these messages only come up when there's a critical or non-retryable error.
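
Something like this hypothetical gate is what I have in mind (illustrative only, not litellm's actual code; the function name and will_retry flag are made up):

import litellm

def maybe_print_feedback_banner(exc: Exception, will_retry: bool) -> None:
    # Hypothetical sketch of the requested default: stay quiet when the
    # error is about to be retried or the banner is explicitly suppressed.
    if will_retry or litellm.suppress_debug_info:
        return
    print(f"{type(exc).__name__}: {exc}")  # at least name the failure
    print("Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new")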