Closed sunxichen closed 2 days ago
To resolve the issue where Dify version 0.6.11 does not differentiate between "Tool Call" and "Function Call" message roles, resulting in a 400 error when using the openai_api_compatible
LLM in function calling agent mode, you need to ensure that the message roles are correctly mapped and handled in the code.
Here is the relevant part of the code that needs to be adjusted:
```python
def _convert_prompt_message_to_dict(self, message: PromptMessage) -> dict:
    """
    Convert PromptMessage to dict for OpenAI API format
    """
    if isinstance(message, UserPromptMessage):
        message = cast(UserPromptMessage, message)
        if isinstance(message.content, str):
            message_dict = {"role": "user", "content": message.content}
        else:
            sub_messages = []
            for message_content in message.content:
                if message_content.type == PromptMessageContentType.TEXT:
                    message_content = cast(PromptMessageContent, message_content)
                    sub_message_dict = {
                        "type": "text",
                        "text": message_content.data
                    }
                    sub_messages.append(sub_message_dict)
                elif message_content.type == PromptMessageContentType.IMAGE:
                    message_content = cast(ImagePromptMessageContent, message_content)
                    sub_message_dict = {
                        "type": "image_url",
                        "image_url": {
                            "url": message_content.data,
                            "detail": message_content.detail.value
                        }
                    }
                    sub_messages.append(sub_message_dict)
            message_dict = {"role": "user", "content": sub_messages}
    elif isinstance(message, AssistantPromptMessage):
        message = cast(AssistantPromptMessage, message)
        message_dict = {"role": "assistant", "content": message.content}
        if message.tool_calls:
            function_call = message.tool_calls[0]
            message_dict["function_call"] = {
                "name": function_call.function.name,
                "arguments": function_call.function.arguments,
            }
    elif isinstance(message, SystemPromptMessage):
        message = cast(SystemPromptMessage, message)
        message_dict = {"role": "system", "content": message.content}
    elif isinstance(message, ToolPromptMessage):
        message = cast(ToolPromptMessage, message)
        message_dict = {
            "role": "tool",
            "name": message.name,
            "content": message.content,
            "tool_call_id": message.tool_call_id
        }
    else:
        raise ValueError(f"Got unknown type {message}")

    if message.name:
        message_dict["name"] = message.name

    return message_dict
```
In this code, ensure that `ToolPromptMessage` is mapped to the role `"tool"` and that `AssistantPromptMessage` correctly handles the `function_call` attribute. This differentiation is crucial for the openai_api_compatible LLM to correctly interpret the message roles and avoid the 400 error [1][2].
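To make the differentiation concrete, here is a minimal, self-contained sketch of a mode-aware mapping. Note this is not Dify's actual code: the parameter name `function_calling_type` and the helper names are assumptions for illustration only.

```python
# Sketch: pick the tool-result role based on the configured function-calling
# mode. "tool_call" corresponds to the newer `tools`/`tool_calls` API,
# anything else falls back to the legacy `functions`/`function_call` API.
# The names below are illustrative, not Dify's real identifiers.

def tool_message_role(function_calling_type: str) -> str:
    """Return the role an OpenAI-compatible server expects for a tool result."""
    if function_calling_type == "tool_call":
        return "tool"        # newer tool-call API (TGI, Xinference, etc.)
    return "function"        # legacy function-call API

def convert_tool_result(name: str, content: str, tool_call_id: str,
                        function_calling_type: str) -> dict:
    """Build the tool-result message dict for the chosen mode."""
    role = tool_message_role(function_calling_type)
    message = {"role": role, "name": name, "content": content}
    if role == "tool":
        # tool_call_id only exists in the newer tool-call API
        message["tool_call_id"] = tool_call_id
    return message

# Usage: the same tool result rendered for each mode
print(convert_tool_result("get_weather", '{"temp": 21}', "call_1", "tool_call"))
print(convert_tool_result("get_weather", '{"temp": 21}', "call_1", "function_call"))
```

With a mapping like this, the role sent to the upstream server follows the Function Calling method selected in the UI instead of being hard-coded.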
Self Checks
Dify version
0.6.11
Cloud or Self Hosted
Self Hosted (Source)
Steps to reproduce
[openai_api_compatible] Error: API request failed with status code 400: {"error":{"message":"bad response status code 400 (request id: 2024062614595215754623335617148)","type":"upstream_error","param":"400","code":"bad_response_status_code"}}
I inspected the logs of the deployment framework (which uses Xinference) providing the openai_api_compatible LLM service. The log indicated:
Exception: [address=0.0.0.0:33477, pid=1686] Unsupported message role: function
According to my understanding and the OpenAI API docs, when Tool Call is selected as the Function Calling method, the message role for a tool call's response should be `tool`. Conversely, if Function Call is chosen as the Function Calling method, the message role should be `function`. After reviewing the code, I found that it does not differentiate between the two and uniformly uses `function` as the tool message role. Many openai_api_compatible service frameworks, such as TGI and Xinference, expect `tool` as the tool message role. This discrepancy can lead to the bug.
✔️ Expected Behavior
When I choose Tool Call as the Function Calling method, the tool message role should be `tool`; when Function Call is chosen, it should be `function`.
❌ Actual Behavior
Currently, Dify does not differentiate between the two scenarios and uniformly uses `function` as the tool message role.
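For reference, the two wire formats the report contrasts look roughly like the following. This is an illustration of the public OpenAI-style chat message shapes, not Dify internals; the `tool_call_id` and content values are made up.

```python
# The same tool result rendered for the two OpenAI-style APIs.

# Newer `tools` API: role must be "tool", and the message echoes the
# tool_call_id from the assistant's preceding tool_calls entry.
tool_call_style = {
    "role": "tool",
    "tool_call_id": "call_abc123",   # hypothetical id, for illustration
    "content": '{"temperature": 21}',
}

# Legacy `functions` API: role must be "function", identified by name.
function_call_style = {
    "role": "function",
    "name": "get_weather",
    "content": '{"temperature": 21}',
}

for msg in (tool_call_style, function_call_style):
    print(msg["role"])
```

Servers such as Xinference that only implement the newer API reject the `function` role, which matches the `Unsupported message role: function` error in the logs above.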