BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Bug, Claude models]: "AnthropicError: No content in response" when content is empty array #3438

Closed: hi019 closed this issue 6 months ago

hi019 commented 6 months ago

What happened?

Sometimes Anthropic returns an empty content array. LiteLLM doesn't expect this and throws an error:

AnthropicError: No content in response

During handling of the above exception, another exception occurred:

APIError                                  Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/litellm/utils.py in exception_type(model, original_exception, custom_llm_provider, completion_kwargs, extra_kwargs)
   8092                     else:
   8093                         exception_mapping_worked = True
-> 8094                         raise APIError(
   8095                             status_code=original_exception.status_code,
   8096                             message=f"AnthropicException - {original_exception.message}. Handle with `litellm.APIError`.",

APIError: AnthropicException - No content in response. Handle with `litellm.APIError`.
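
For context, the kind of raw Anthropic Messages API response that triggers this looks roughly like the sketch below (illustrative values; only the empty content list matters here):

# Illustrative shape only: an Anthropic Messages API response where the
# assistant produced no content blocks. The empty `content` list is what
# trips the "No content in response" check in LiteLLM.
empty_content_response = {
    "id": "msg_placeholder",
    "type": "message",
    "role": "assistant",
    "content": [],          # <-- empty content array
    "model": "claude-3-opus-20240229",
    "stop_reason": "end_turn",
    "usage": {"input_tokens": 0, "output_tokens": 0},
}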

Repro:

import litellm
litellm.set_verbose=True
litellm.drop_params=True

await litellm.acompletion(
    model='anthropic/claude-3-opus-20240229',
    api_key='sk-ant',
    messages=[
        {
            'role': 'system',
            'content': "You will be given a list of fruits. Use the submitFruit function to submit a fruit. Don't say anything after."
        },
        {
            'role': 'user',
            'content': "I like apples"
        },
        {
            'content': "<thinking>The most relevant tool for this request is the submitFruit function.</thinking>",
            'role': 'assistant',
            'tool_calls': [
                {
                    'function': {
                        'arguments': '{"name": "Apple"}',
                        'name': 'submitFruit'
                    },
                    'id': 'toolu_012ZTYKWD4VqrXGXyE7kEnAK',
                    'type': 'function'
                }
            ]
        },
        {
            'role': 'tool',
            'content': '{"success":true}',
            'tool_call_id': 'toolu_012ZTYKWD4VqrXGXyE7kEnAK'
        }
    ],
    max_tokens=2000,
    temperature=1,
    tools=[
        {
            'type': 'function',
            'function': {
                'name': 'submitFruit',
                'description': 'Submits a fruit',
                'parameters': {
                    'type': 'object',
                    'properties': {
                        'name': {'type': 'string', 'description': 'The name of the fruit'}
                    },
                    'required': ['name']
                }
            }
        }
    ],
    frequency_penalty=0.8
)
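
Until this is handled inside LiteLLM, the error message itself points at a client-side workaround: catch `litellm.APIError` around the call. A minimal sketch (hypothetical helper name, assuming the repro arguments above):

import litellm

async def call_with_empty_content_guard(**kwargs):
    # Sketch of a workaround: the raised error says "Handle with
    # `litellm.APIError`", so catch it and treat the empty-content case
    # as "no assistant output" instead of crashing.
    try:
        return await litellm.acompletion(**kwargs)
    except litellm.APIError as e:
        if "No content in response" in str(e):
            return None
        raise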

Relevant log output

No response

Twitter / LinkedIn details

No response

krrishdholakia commented 6 months ago

Hi @hi019, this is not a bug; it's expected behaviour, since OpenAI's pydantic object expects `content`, if not None, to be a non-empty string.

(Screenshot attached: 2024-05-03 at 8:43:25 PM)
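
For illustration, a minimal pydantic sketch of that kind of constraint (a hypothetical model, not the actual OpenAI class in the screenshot):

from typing import Optional
from pydantic import BaseModel, ValidationError, field_validator

# Hypothetical sketch of the constraint described above, not the real
# OpenAI/LiteLLM model: `content` may be None, but if present it must
# be a non-empty string.
class AssistantMessage(BaseModel):
    role: str
    content: Optional[str] = None

    @field_validator("content")
    @classmethod
    def _non_empty(cls, v):
        if v is not None and v == "":
            raise ValueError("content, if not None, must be a non-empty string")
        return v

try:
    AssistantMessage(role="assistant", content="")
except ValidationError as err:
    # this is why LiteLLM raises instead of building a message with empty content
    print(err)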

What would you expect to happen here?

krrishdholakia commented 6 months ago

Closing as this isn't a "bug", but I'm open to your feedback on this @hi019.