Closed: Dofine-dufei closed this issue 2 months ago
LOG:

```
data: {"event": "error", "conversation_id": "5d74e70a-bcf9-4a76-bc6f-ad91db9b9b2f", "message_id": "aeccf4a0-399f-499b-a1ea-49c7afceb939", "created_at": 1721690363, "code": "completion_request_error", "status": 400, "message": "[tongyi] Server Unavailable Error, Failed to invoke model qwen-max, status code: 500, message: messages with role \"tool\" must be a response to a preceeding message with \"tool_calls\"."}
```
To resolve the issue with invoking function calls using the qwen-max model in your self-hosted Dify 0.6.14, you need to ensure that every message with the role `tool` is preceded by an assistant message carrying `tool_calls`. The model rejects the request otherwise, as the 400 error in the log shows.
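As a concrete illustration, a well-formed sequence interleaves an assistant message that carries `tool_calls` with the `tool` message answering it. This is a minimal sketch of the generic OpenAI-compatible message shape, not Dify's actual payload; the call id `call_001` and the `get_weather` function are invented placeholders:

```python
# Illustrative message sequence for an OpenAI-compatible chat API.
# "call_001" and "get_weather" are made-up placeholders, not values
# from the actual Dify request.
messages = [
    {"role": "user", "content": "What's the weather in Hangzhou?"},
    {
        "role": "assistant",
        "content": "",
        "tool_calls": [{
            "id": "call_001",
            "type": "function",
            "function": {"name": "get_weather", "arguments": '{"city": "Hangzhou"}'},
        }],
    },
    # Valid only because the assistant message above carries tool_calls.
    {"role": "tool", "tool_call_id": "call_001", "content": '{"temp_c": 31}'},
]

# The tool message must answer a tool_call id from the preceding assistant turn.
preceding_ids = {tc["id"] for tc in messages[1]["tool_calls"]}
assert messages[2]["tool_call_id"] in preceding_ids
```

If the assistant message in the middle is missing, or its `tool_calls` field is dropped, the trailing `tool` message triggers exactly the 400 seen in the log.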
From the provided code, the `_convert_prompt_messages_to_tongyi_messages` method is responsible for converting prompt messages to the format the model expects. Ensure that tool calls are correctly formatted and included in the prompt messages. Here is the relevant code that handles tool calls:
```python
def _convert_prompt_messages_to_tongyi_messages(self, prompt_messages: list[PromptMessage],
                                                rich_content: bool = False) -> list[dict]:
    """
    Convert prompt messages to tongyi messages

    :param prompt_messages: prompt messages
    :return: tongyi messages
    """
    tongyi_messages = []
    for prompt_message in prompt_messages:
        if isinstance(prompt_message, SystemPromptMessage):
            tongyi_messages.append({
                'role': 'system',
                'content': prompt_message.content if not rich_content else [{"text": prompt_message.content}],
            })
        elif isinstance(prompt_message, UserPromptMessage):
            if isinstance(prompt_message.content, str):
                tongyi_messages.append({
                    'role': 'user',
                    'content': prompt_message.content if not rich_content else [{"text": prompt_message.content}],
                })
            else:
                sub_messages = []
                for message_content in prompt_message.content:
                    if message_content.type == PromptMessageContentType.TEXT:
                        message_content = cast(TextPromptMessageContent, message_content)
                        sub_message_dict = {
                            "text": message_content.data
                        }
                        sub_messages.append(sub_message_dict)
                    elif message_content.type == PromptMessageContentType.IMAGE:
                        message_content = cast(ImagePromptMessageContent, message_content)
                        image_url = message_content.data
                        if message_content.data.startswith("data:"):
                            # convert image base64 data to file in /tmp
                            image_url = self._save_base64_image_to_file(message_content.data)
                        sub_message_dict = {
                            "image": image_url
                        }
                        sub_messages.append(sub_message_dict)
                # resort sub_messages to ensure text is always at last
                sub_messages = sorted(sub_messages, key=lambda x: 'text' in x)
                tongyi_messages.append({
                    'role': 'user',
                    'content': sub_messages
                })
        elif isinstance(prompt_message, AssistantPromptMessage):
            content = prompt_message.content
            if not content:
                content = ' '
            tongyi_messages.append({
                'role': 'assistant',
                'content': content if not rich_content else [{"text": content}],
            })
        elif isinstance(prompt_message, ToolPromptMessage):
            tongyi_messages.append({
                "role": "tool",
                "content": prompt_message.content,
                "name": prompt_message.tool_call_id
            })
        else:
            raise ValueError(f"Got unknown type {prompt_message}")

    return tongyi_messages
```
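One way to see how the 400 can arise: in the converter above, the `AssistantPromptMessage` branch copies only `content`, so any `tool_calls` on the assistant turn are dropped, while the `ToolPromptMessage` branch still emits a `role: "tool"` entry. A small standalone check (a sketch, not Dify code) can detect such orphaned tool messages before the request is sent:

```python
def find_orphan_tool_messages(messages: list[dict]) -> list[int]:
    """Return indexes of role="tool" messages whose preceding non-tool
    message is not an assistant message with a non-empty tool_calls list."""
    orphans = []
    for i, msg in enumerate(messages):
        if msg.get("role") != "tool":
            continue
        # Walk back over earlier tool messages answering the same call batch.
        j = i - 1
        while j >= 0 and messages[j].get("role") == "tool":
            j -= 1
        prev = messages[j] if j >= 0 else {}
        if not (prev.get("role") == "assistant" and prev.get("tool_calls")):
            orphans.append(i)
    return orphans

# An assistant turn that lost its tool_calls (as in the converter above)
# leaves the following tool message orphaned:
bad = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": " "},                 # tool_calls dropped
    {"role": "tool", "name": "call_001", "content": "result"},
]
print(find_orphan_tool_messages(bad))  # -> [2]
```

Running such a check on the converted list right before the SDK call makes the mismatch visible in your own logs instead of surfacing as an opaque upstream 500.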
Additionally, ensure that the `_convert_prompt_message_to_dict` method correctly converts `ToolPromptMessage` to the expected format:
```python
def _convert_prompt_message_to_dict(self, message: PromptMessage) -> dict:
    """
    Convert PromptMessage to dict for OpenAI Compatibility API
    """
    if isinstance(message, UserPromptMessage):
        message = cast(UserPromptMessage, message)
        if isinstance(message.content, str):
            message_dict = {"role": "user", "content": message.content}
        else:
            raise ValueError("User message content must be str")
    elif isinstance(message, AssistantPromptMessage):
        message = cast(AssistantPromptMessage, message)
        message_dict = {"role": "assistant", "content": message.content}
        if message.tool_calls and len(message.tool_calls) > 0:
            message_dict["function_call"] = {
                "name": message.tool_calls[0].function.name,
                "arguments": message.tool_calls[0].function.arguments
            }
    elif isinstance(message, SystemPromptMessage):
        message = cast(SystemPromptMessage, message)
        message_dict = {"role": "system", "content": message.content}
    elif isinstance(message, ToolPromptMessage):
        message = cast(ToolPromptMessage, message)
        message_dict = {
            "role": "user",
            "content": [{
                "type": "tool_result",
                "tool_use_id": message.tool_call_id,
                "content": message.content
            }]
        }
    else:
        raise ValueError(f"Unknown message type {type(message)}")

    return message_dict
```
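Note what the `ToolPromptMessage` branch of this second converter produces: the tool result is wrapped as `role: "user"` content, so it never triggers the "must follow tool_calls" check that a `role: "tool"` message does. Here is a standalone reduction of just that branch; the `FakeToolMessage` stand-in is invented for illustration, in place of Dify's real `ToolPromptMessage` class:

```python
from dataclasses import dataclass

# Minimal stand-in for Dify's ToolPromptMessage, for illustration only.
@dataclass
class FakeToolMessage:
    tool_call_id: str
    content: str

def tool_message_to_dict(message: FakeToolMessage) -> dict:
    # Mirrors the ToolPromptMessage branch of _convert_prompt_message_to_dict:
    # the tool result is wrapped as user content rather than a "tool" role.
    return {
        "role": "user",
        "content": [{
            "type": "tool_result",
            "tool_use_id": message.tool_call_id,
            "content": message.content,
        }],
    }

result = tool_message_to_dict(FakeToolMessage("call_001", '{"temp_c": 31}'))
assert result["role"] == "user"
```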
Ensure that your prompt messages are structured correctly, with an assistant message carrying `tool_calls` preceding any `tool` messages. This should resolve the "messages with role 'tool' must be a response to a preceding message with 'tool_calls'" error [1][2][3][4][5].
To continue talking to Dosu, mention @dosu.
I fixed it!!!
Congrats! Are you interested in making a PR?
Self Checks
Dify version
0.6.14
Cloud or Self Hosted
Self Hosted (Source)
Steps to reproduce
On 2024-07-23, I could not invoke function calls with some Qwen LLMs, such as qwen-max... But it worked last week.
✔️ Expected Behavior
I expect function calls to be invoked correctly.
❌ Actual Behavior
However, qwen-max cannot invoke function calls correctly.