Closed: bishoco closed this 1 month ago
This isn't really a bug but a requirement from Anthropic when tool use blocks are sent. If you downgrade and run with `litellm.set_verbose=True`, what does the raw request look like?
I believe this is a duplicate of https://github.com/BerriAI/litellm/issues/5388
If you can share the raw request on the earlier litellm version, it would help me confirm this @bishoco
@krrishdholakia Here is the verbose output from the Anthropic call that fails in 1.46.1:
Request to litellm:
litellm.completion(model='claude-3-5-sonnet-20240620', messages=[{'role': 'user', 'content': 'What is the current stock price for MELI?', 'files': None}, {'content': '', 'role': 'assistant', 'tool_calls': [{'id': 'call_yobfH70Oe6xw4Wl5hbjB0GYU', 'function': {'arguments': '{"ticker":"MELI"}', 'name': 'stock_price_volume'}, 'type': 'function'}]}, {'tool_call_id': 'call_yobfH70Oe6xw4Wl5hbjB0GYU', 'role': 'tool', 'name': 'stock_price_volume', 'content': '[{"symbol": "MELI", "price": 2077.65, "volume": 149238}]'}], stream=True, temperature=0.8)
SYNC kwargs[caching]: False; litellm.cache: None; kwargs.get('cache')['no-cache']: False
Final returned optional params: {'temperature': 0.8, 'stream': True}
POST Request Sent from LiteLLM:
curl -X POST \
https://api.anthropic.com/v1/messages \
-H 'accept: *****' -H 'anthropic-version: *****' -H 'content-type: *****' -H 'x-api-key: sk-ant-api03-nLbnVtH-p-XaqLE9CSQh0wd_2Kd6WsVVT1dxJSTeuhDX3OFgfm0********************************************' \
-d '{'messages': [{'role': 'user', 'content': [{'type': 'text', 'text': 'What is the current stock price for MELI?'}]}, {'role': 'assistant', 'content': [{'type': 'tool_use', 'id': 'call_yobfH70Oe6xw4Wl5hbjB0GYU', 'name': 'stock_price_volume', 'input': {'ticker': 'MELI'}}]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'call_yobfH70Oe6xw4Wl5hbjB0GYU', 'content': '[{"symbol": "MELI", "price": 2077.65, "volume": 149238}]'}]}], 'temperature': 0.8, 'max_tokens': 4096, 'model': 'claude-3-5-sonnet-20240620'}'
Here is the verbose output from the Anthropic call that works in 1.32.1:
Request to litellm:
litellm.completion(model='claude-3-5-sonnet-20240620', messages=[{'role': 'user', 'content': 'What is the current stock price for MELI?', 'files': None}, {'content': '', 'role': 'assistant', 'tool_calls': [{'id': 'call_yobfH70Oe6xw4Wl5hbjB0GYU', 'function': {'arguments': '{"ticker":"MELI"}', 'name': 'stock_price_volume'}, 'type': 'function'}]}, {'tool_call_id': 'call_yobfH70Oe6xw4Wl5hbjB0GYU', 'role': 'tool', 'name': 'stock_price_volume', 'content': '[{"symbol": "MELI", "price": 2077.65, "volume": 149238}]'}], stream=True, temperature=0.8)
self.optional_params: {}
kwargs[caching]: False; litellm.cache: None
Final returned optional params: {'stream': True, 'temperature': 0.8}
self.optional_params: {'stream': True, 'temperature': 0.8}
POST Request Sent from LiteLLM:
curl -X POST \
https://api.anthropic.com/v1/messages \
-H 'accept: application/json' -H 'anthropic-version: 2023-06-01' -H 'content-type: application/json' -H 'x-api-key: sk-ant-api03-nLbnVtH-p-XaqLE9CSQh0wd_2Kd6WsVVT1dxJSTeuhDX3OFgfm09l53Ctpmollvn_t0bozvxGs5********************' \
-d '{'model': 'claude-3-5-sonnet-20240620', 'messages': [{'role': 'user', 'content': [{'type': 'text', 'text': 'What is the current stock price for MELI?'}]}, {'role': 'assistant', 'content': [{'type': 'text', 'text': '<function_calls>\n<invoke>\n<tool_name>stock_price_volume</tool_name>\n<parameters>\n<ticker>MELI</ticker>\n</parameters>\n</invoke>\n</function_calls>'}]}, {'role': 'user', 'content': [{'type': 'text', 'text': '<function_results>\n<result>\n<tool_name>stock_price_volume</tool_name>\n<stdout>\n[{"symbol": "MELI", "price": 2077.65, "volume": 149238}]\n</stdout>\n</result>\n</function_results>'}]}], 'temperature': 0.8, 'max_tokens': 256}'
1.46.1 has this in the request: [{'type': 'tool_use', 'id': ..., which appears to be the difference.
Got it. It looks like the older version predates Anthropic exposing tool calls as a supported param.
The newer version uses Anthropic's supported param, which leads to the error being raised by Anthropic.
How would you expect us to handle this? @bishoco
@krrishdholakia I'm not entirely sure. I'd expect that a request with tool calls that works against OpenAI should also work against Anthropic. It seems like litellm isn't properly translating my OpenAI-formatted prompt into an Anthropic-formatted prompt.
I could be missing something, though. Maybe I'm not formatting for OpenAI properly and OpenAI is just more lenient?
Hey @bishoco, the issue is caused because Anthropic exposed tool calling support, which we now use (more stable than the previous XML parsing).
But it seems they require the `tools` param to be passed in for any tool-calling request (which they validate by checking the params).
This isn't a requirement for OpenAI, which is why you're seeing the issue. If you pass in the `tools` list, the issue would go away.
I was wondering how we might be able to help here.
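Something along these lines should do it (the tool schema below is my guess from the name and arguments in your logs, so adjust it to your real definition):

```python
# Hypothetical tool definition, inferred from the logged call
# ("stock_price_volume" with a "ticker" argument); adjust to the real schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "stock_price_volume",
            "description": "Get the current price and trading volume for a ticker.",
            "parameters": {
                "type": "object",
                "properties": {
                    "ticker": {
                        "type": "string",
                        "description": "Stock ticker symbol, e.g. MELI",
                    }
                },
                "required": ["ticker"],
            },
        },
    }
]

response = litellm.completion(
    model="claude-3-5-sonnet-20240620",
    messages=messages,  # the same OpenAI-format history, including the tool result
    tools=tools,        # Anthropic validates tool_use / tool_result blocks against this
    stream=True,
    temperature=0.8,
)
```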
@krrishdholakia I see. So Anthropic requires `tools` on all tool requests, even on the request that carries the function call results. OpenAI doesn't require `tools` on the request that carries the function call results. And there is nothing litellm can do if I don't provide the `tools` list.
That all makes sense. I don't think there is anything litellm can do here.
Also, I tried adding `tools` into the function call result request and it now works for both Anthropic and OpenAI. Thank you for the help on this. I think this issue can be closed.
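For anyone else who lands here, the only change was carrying the same `tools` list through to the request that contains the tool result. Roughly like this (a sketch, not my exact code; the OpenAI model name is just an example):

```python
# Passing the same `tools` list on the follow-up request that carries the
# tool result makes the call succeed on both providers.
for model in ("gpt-4o", "claude-3-5-sonnet-20240620"):  # "gpt-4o" is illustrative
    response = litellm.completion(
        model=model,
        messages=messages,  # full history: user turn, tool call, tool result
        tools=tools,        # previously omitted on this request; now included
        stream=True,
        temperature=0.8,
    )
```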
What would a better error message have been here?
@krrishdholakia I'm struggling to come up with a better error message here.
I was thrown off because this was working and then suddenly wasn't. It's an annoying difference between the two model APIs, especially because litellm has no way to reconcile it.
You could maybe add something to the message indicating that Anthropic requires tool definitions on function call results while other providers do not. That might get wordy, though.
What happened?
I'm using function calling with Anthropic and OpenAI models. In litellm 1.46.1, I get an error when making a call to an Anthropic model with the results of a tool call (see the error in the relevant log output). The same call works properly with OpenAI models. Also, if I downgrade to litellm 1.32.1, the error goes away.
Here is code that triggers the error for me:
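(Reconstructed from the logged `litellm.completion` request above; the messages and tool-call ID are copied verbatim from that log, and only the surrounding wrapper is assumed.)

```python
import litellm

messages = [
    {"role": "user", "content": "What is the current stock price for MELI?"},
    {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {
                "id": "call_yobfH70Oe6xw4Wl5hbjB0GYU",
                "type": "function",
                "function": {
                    "name": "stock_price_volume",
                    "arguments": '{"ticker":"MELI"}',
                },
            }
        ],
    },
    {
        "role": "tool",
        "tool_call_id": "call_yobfH70Oe6xw4Wl5hbjB0GYU",
        "name": "stock_price_volume",
        "content": '[{"symbol": "MELI", "price": 2077.65, "volume": 149238}]',
    },
]

# Fails on 1.46.1: the request carries a tool result but no `tools` list,
# and Anthropic rejects tool_use/tool_result blocks without tool definitions.
response = litellm.completion(
    model="claude-3-5-sonnet-20240620",
    messages=messages,
    stream=True,
    temperature=0.8,
)
```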
Relevant log output