BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Bug]: vertex ai function calling #4192

Closed: themrzmaster closed this issue 2 months ago

themrzmaster commented 3 months ago

What happened?

[{'role': 'system', 'content': "You are a polite and helpful digital assistant"}, {'role': 'user', 'content': 'hi'}, ChatCompletionMessage(content='Hi! how can I help you?', role='assistant', function_call=None, tool_calls=None), {'role': 'user', 'content': 'where is my order'}, {'content': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': 'call_2t6e', 'function': {'arguments': '{"order_id":""}', 'name': 'get_order_status'}, 'type': 'function'}]}, {'role': 'tool', 'content': 'Error: Invalid order number', 'tool_call_id': 'call_2t6e', 'name': 'get_order_status'}]

When this message list is sent to gemini-1.5-pro through the Vertex AI integration, it fails with: openai.BadRequestError: Error code: 400 - {'error': {'message': 'BadRequestError: GroqException - Error code: 400 - {\'error\': {\'message\': "\'messages.4\' : for \'role:assistant\' the following must be satisfied[(\'messages.4.function_call\' : Value is not nullable)]", \'type\': \'invalid_request_error\'}}', 'type': None, 'param': None, 'code': 400}}

Other models like gpt-4 work great. This was working before the last Vertex update.
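
For context, a minimal sketch of the kind of call that triggers this; the tool schema here is illustrative (not my exact one), and messages is the list pasted above:

import litellm

# Illustrative tool schema; the real app defines get_order_status along these lines.
tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the status of an order by its id",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

# messages = the list shown above (system, user, assistant, tool turns)
response = litellm.completion(
    model="vertex_ai/gemini-1.5-pro",  # routed through the Vertex AI integration
    messages=messages,
    tools=tools,
)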

Relevant log output

No response

Twitter / LinkedIn details

No response

krrishdholakia commented 3 months ago

BadRequestError: GroqException - Error code: 400 - {\'error\': {\'message\': "\'messages.4\' : for \'role:assistant\

Is this a Vertex error, @themrzmaster? The error code seems to be from Groq.

krrishdholakia commented 3 months ago

bump on this @themrzmaster

themrzmaster commented 3 months ago

sorry, wrong log.


Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/litellm/llms/vertex_ai.py", line 981, in async_completion
    getattr(response.candidates[0].content.parts[0], "function_call", None)
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 350, in acompletion
    response = await init_response
               ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/litellm/llms/vertex_ai.py", line 1149, in async_completion
    raise VertexAIError(status_code=500, message=str(e))

@krrishdholakia

themrzmaster commented 3 months ago

Just checked the latest litellm version. Now it's not calling any function at all.

krrishdholakia commented 3 months ago

ile "/usr/local/lib/python3.11/site-packages/litellm/llms/vertex_ai.py", line 981, in async_completion getattr(response.candidates[0].content.parts[0], "function_call", None)


IndexError: list index out of range

This shows a function call isn't being received.

What do your server logs show with --detailed_debug, @themrzmaster?
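
For reference, the failing line reads response.candidates[0].content.parts[0] unguarded, so an empty or blocked candidate blows up with an IndexError instead of surfacing as "no function call". A sketch of the kind of guard I mean, assuming the response object matches the traceback:

# Sketch only: guard the reads that the traceback shows failing.
candidates = getattr(response, "candidates", None) or []
parts = candidates[0].content.parts if candidates else []
if parts:
    function_call = getattr(parts[0], "function_call", None)
else:
    # Gemini returned no parts (e.g. a blocked or empty candidate);
    # treat it as "no function call received" rather than crashing.
    function_call = None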

krrishdholakia commented 3 months ago

Can we actually do a debug call for this, @themrzmaster?

Will be quicker to fix what's happening - https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

krrishdholakia commented 3 months ago

cc @Manouchehri: are you seeing any issues with Vertex AI function calling?

It works fine on my end

jabbasj commented 2 months ago

This issue happens to me too; it's intermittent, so it's tricky to reproduce 🤐. When using function calling through AutoGen with Vertex AI Gemini 1.5 Pro, it alternates between the following 3 exceptions roughly 40% of the time:

Issue 1: litellm.InternalServerError: VertexAIException InternalServerError - 400 Please ensure that multiturn requests alternate between user and model.

Issue 2: the same as in this ticket (the second log posted, not the original one)

Issue 3: vertexai / litellm: Object of type MapComposite is not JSON serializable (workaround sketch below)
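
For Issue 3, a minimal sketch of the workaround I've been trying: recursively unwrap the proto-plus containers into plain dicts/lists before serializing. This assumes proto-plus (which ships with the Vertex SDK) and that the offending object is the function-call args:

import json
from proto.marshal.collections.maps import MapComposite
from proto.marshal.collections.repeated import RepeatedComposite

def to_jsonable(obj):
    # Recursively convert proto-plus map/list containers into plain
    # dicts/lists so json.dumps no longer chokes on MapComposite.
    if isinstance(obj, MapComposite):
        return {key: to_jsonable(value) for key, value in obj.items()}
    if isinstance(obj, RepeatedComposite):
        return [to_jsonable(value) for value in obj]
    return obj

# e.g. json.dumps(to_jsonable(function_call.args))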

@krrishdholakia happy to schedule a debug call with you; let me know if you'd prefer I open a separate ticket with more details.

krrishdholakia commented 2 months ago

hey @jabbasj can you run the requests with vertex_ai_beta/, and if the issue persists, create a ticket with a sample code snippet for repro?
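
i.e. something like the following, where the only change is the provider prefix (the model name here is illustrative):

import litellm

# Same request as before, just swapping the provider prefix from
# vertex_ai/ to vertex_ai_beta/ to route through the newer integration.
response = litellm.completion(
    model="vertex_ai_beta/gemini-1.5-pro",
    messages=messages,
    tools=tools,
)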