BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Bug]: Azure AI Studio Mistral won't work with function calling #3107

Closed by hooman-bayer 6 months ago

hooman-bayer commented 6 months ago

What happened?

Thanks again for this amazing library! I know managing so many providers, with all the features they offer, is quite complex. I noticed that function calling with Mistral Large deployed through Azure AI Studio does not seem to work. I'm not sure whether the issue is on your side or Mistral's (my suspicion is the latter). Steps to reproduce (see the sketch after this list):

  1. Instantiate mistral-large-latest as described in the LiteLLM documentation for Azure AI Studio
  2. Run the function calling example from the LiteLLM documentation
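
Roughly, the reproduction looks like this (a minimal sketch; the provider prefix, endpoint, and environment-variable name are placeholders for my deployment, while the messages/tools payload matches the request logged below):

import os
import litellm

# Minimal repro sketch; endpoint, key, and provider prefix are placeholders for an
# Azure AI Studio serverless deployment of Mistral Large.
response = litellm.completion(
    model="mistral/mistral-large-latest",  # assumption: prefix per the Azure AI Studio docs
    api_base="https://<your-deployment>.inference.ai.azure.com/v1/",
    api_key=os.environ["AZURE_AI_MISTRAL_API_KEY"],  # hypothetical env var name
    messages=[
        {"role": "user", "content": "What's the weather like in San Francisco, Tokyo, and Paris?"}
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            },
        }
    ],
    tool_choice="auto",
)
print(response)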

Relevant log output

POST Request Sent from LiteLLM:
curl -X POST \
https://mistral-large-2402-mga-serverless.swedencentral.inference.ai.azure.com/v1/ \
-d '{'model': 'mistral-large-latest', 'messages': [{'role': 'user', 'content': "What's the weather like in San Francisco, Tokyo, and Paris?"}], 'tools': [{'type': 'function', 'function': {'name': 'get_current_weather', 'description': 'Get the current weather in a given location', 'parameters': {'type': 'object', 'properties': {'location': {'type': 'string', 'description': 'The city and state, e.g. San Francisco, CA'}, 'unit': {'type': 'string', 'enum': ['celsius', 'fahrenheit']}}, 'required': ['location']}}}], 'tool_choice': 'auto', 'extra_body': {}}'

RAW RESPONSE:
{"id": "REDACTED", "choices": [{"finish_reason": "tool_calls", "index": 0, "logprobs": null, "message": {"content": "", "role": "assistant", "function_call": null, "tool_calls": [{"id": "call_get_current_weather_0", "function": {"arguments": {"location": "San Francisco, CA", "unit": "fahrenheit"}, "name": "get_current_weather", "call_id": null}, "type": "function"}]}}], "created": 1713400368, "model": "mistral-large", "object": "chat.completion", "system_fingerprint": null, "usage": {"completion_tokens": 35, "prompt_tokens": 124, "total_tokens": 159}}

openai.py: Received openai error - Invalid response object Traceback (most recent call last):
  File "/Users/gmfsj/Library/Caches/pypoetry/virtualenvs/bayer-deepmind-FyyS42nb-py3.11/lib/python3.11/site-packages/litellm/utils.py", line 6908, in convert_to_model_response_object
    role=choice["message"]["role"],
          ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gmfsj/Library/Caches/pypoetry/virtualenvs/bayer-deepmind-FyyS42nb-py3.11/lib/python3.11/site-packages/litellm/utils.py", line 309, in __init__
    self.tool_calls.append(ChatCompletionMessageToolCall(**tool_call))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gmfsj/Library/Caches/pypoetry/virtualenvs/bayer-deepmind-FyyS42nb-py3.11/lib/python3.11/site-packages/litellm/utils.py", line 275, in __init__
    self.function = Function(**function)
                    ^^^^^^^^^^^^^^^^^^^^
  File "/Users/gmfsj/Library/Caches/pypoetry/virtualenvs/bayer-deepmind-FyyS42nb-py3.11/lib/python3.11/site-packages/pydantic/main.py", line 175, in __init__
    self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for Function
arguments
  Input should be a valid string [type=string_type, input_value={'location': 'San Francis...', 'unit': 'fahrenheit'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.7/v/string_type

RAW RESPONSE:
Traceback (most recent call last):
  File "/Users/gmfsj/Library/Caches/pypoetry/virtualenvs/bayer-deepmind-FyyS42nb-py3.11/lib/python3.11/site-packages/litellm/utils.py", line 6908, in convert_to_model_response_object
    role=choice["message"]["role"],
          ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gmfsj/Library/Caches/pypoetry/virtualenvs/bayer-deepmind-FyyS42nb-py3.11/lib/python3.11/site-packages/litellm/utils.py", line 309, in __init__
    self.tool_calls.append(ChatCompletionMessageToolCall(**tool_call))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gmfsj/Library/Caches/pypoetry/virtualenvs/bayer-deepmind-FyyS42nb-py3.11/lib/python3.11/site-packages/litellm/utils.py", line 275, in __init__
    self.function = Function(**function)
                    ^^^^^^^^^^^^^^^^^^^^
  File "/Users/gmfsj/Library/Caches/pypoetry/virtualenvs/bayer-deepmind-FyyS42nb-py3.11/lib/python3.11/site-packages/pydantic/main.py", line 175, in __init__
    self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for Function
arguments
  Input should be a valid string [type=string_type, input_value={'location': 'San Francis...', 'unit': 'fahrenheit'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.7/v/string_type

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/gmfsj/Library/Caches/pypoetry/virtualenvs/bayer-deepmind-FyyS42nb-py3.11/lib/python3.11/site-packages/litellm/llms/openai.py", line 414, in completion
    raise e
  File "/Users/gmfsj/Library/Caches/pypoetry/virtualenvs/bayer-deepmind-FyyS42nb-py3.11/lib/python3.11/site-packages/litellm/llms/openai.py", line 381, in completion
    return convert_to_model_response_object(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gmfsj/Library/Caches/pypoetry/virtualenvs/bayer-deepmind-FyyS42nb-py3.11/lib/python3.11/site-packages/litellm/utils.py", line 7027, in convert_to_model_response_object
Exception: Invalid response object Traceback (most recent call last):
  File "/Users/gmfsj/Library/Caches/pypoetry/virtualenvs/bayer-deepmind-FyyS42nb-py3.11/lib/python3.11/site-packages/litellm/utils.py", line 6908, in convert_to_model_response_object
    role=choice["message"]["role"],
          ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gmfsj/Library/Caches/pypoetry/virtualenvs/bayer-deepmind-FyyS42nb-py3.11/lib/python3.11/site-packages/litellm/utils.py", line 309, in __init__
    self.tool_calls.append(ChatCompletionMessageToolCall(**tool_call))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gmfsj/Library/Caches/pypoetry/virtualenvs/bayer-deepmind-FyyS42nb-py3.11/lib/python3.11/site-packages/litellm/utils.py", line 275, in __init__
    self.function = Function(**function)
                    ^^^^^^^^^^^^^^^^^^^^
  File "/Users/gmfsj/Library/Caches/pypoetry/virtualenvs/bayer-deepmind-FyyS42nb-py3.11/lib/python3.11/site-packages/pydantic/main.py", line 175, in __init__
    self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for Function
arguments
  Input should be a valid string [type=string_type, input_value={'location': 'San Francis...', 'unit': 'fahrenheit'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.7/v/string_type

Twitter / LinkedIn details

hooman650

krrishdholakia commented 6 months ago

Oh I see it. It looks like the tool call's arguments comes back as a dict rather than a JSON string, which leads to the pydantic error. Will add handling for this.
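
Roughly, the handling would serialize dict-valued arguments before the pydantic Function model validates them (a sketch of the idea only, not the exact committed change; the helper name is hypothetical):

import json

def normalize_tool_call(tool_call: dict) -> dict:
    # The OpenAI schema expects function.arguments to be a JSON string, but some
    # providers (here, Mistral on Azure AI Studio) return it as a dict.
    function = tool_call.get("function") or {}
    arguments = function.get("arguments")
    if isinstance(arguments, dict):
        function["arguments"] = json.dumps(arguments)
    return tool_call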

krrishdholakia commented 6 months ago

fixed + testing added - https://github.com/BerriAI/litellm/commit/18e3cf8bff21008fcfec651f513d3518327812b9

@hooman-bayer should be live soon in v1.35.11+