Closed diginovo closed 47 minutes ago
Hey @diginovo! 👋 I'm here to help you with any bugs, questions, or contributions you have for Dify. Let me know how I can assist you!
I found two similar discussions that might be relevant to your issue:
The issue with Qwen 2.5 not working with the defined prompt in workflows or direct chat, while Qwen 2 works fine, could be due to differences in how the prompt messages are constructed and handled for different model versions. Specifically, the _get_prompt_template method constructs the prompt messages based on the model mode (either CHAT or COMPLETION), and there might be differences in how Qwen 2.5 expects these prompts compared to Qwen 2.
Here is the relevant code snippet that constructs the prompt messages:
def _get_prompt_template(
    self,
    node_data: QuestionClassifierNodeData,
    query: str,
    memory: Optional[TokenBufferMemory],
    max_token_limit: int = 2000,
) -> Union[list[ChatModelMessage], CompletionModelPromptTemplate]:
    model_mode = ModelMode.value_of(node_data.model.mode)
    classes = node_data.classes
    categories = []
    for class_ in classes:
        category = {"category_id": class_.id, "category_name": class_.name}
        categories.append(category)
    instruction = node_data.instruction or ""
    input_text = query
    memory_str = ""
    if memory:
        memory_str = memory.get_history_prompt_text(
            max_token_limit=max_token_limit, message_limit=node_data.memory.window.size
        )
    prompt_messages = []
    if model_mode == ModelMode.CHAT:
        system_prompt_messages = ChatModelMessage(
            role=PromptMessageRole.SYSTEM, text=QUESTION_CLASSIFIER_SYSTEM_PROMPT.format(histories=memory_str)
        )
        prompt_messages.append(system_prompt_messages)
        user_prompt_message_1 = ChatModelMessage(
            role=PromptMessageRole.USER, text=QUESTION_CLASSIFIER_USER_PROMPT_1
        )
        prompt_messages.append(user_prompt_message_1)
        assistant_prompt_message_1 = ChatModelMessage(
            role=PromptMessageRole.ASSISTANT, text=QUESTION_CLASSIFIER_ASSISTANT_PROMPT_1
        )
        prompt_messages.append(assistant_prompt_message_1)
        user_prompt_message_2 = ChatModelMessage(
            role=PromptMessageRole.USER, text=QUESTION_CLASSIFIER_USER_PROMPT_2
        )
        prompt_messages.append(user_prompt_message_2)
        assistant_prompt_message_2 = ChatModelMessage(
            role=PromptMessageRole.ASSISTANT, text=QUESTION_CLASSIFIER_ASSISTANT_PROMPT_2
        )
        prompt_messages.append(assistant_prompt_message_2)
        user_prompt_message_3 = ChatModelMessage(
            role=PromptMessageRole.USER,
            text=QUESTION_CLASSIFIER_USER_PROMPT_3.format(
                input_text=input_text,
                categories=json.dumps(categories, ensure_ascii=False),
                classification_instructions=instruction,
            ),
        )
        prompt_messages.append(user_prompt_message_3)
        return prompt_messages
    elif model_mode == ModelMode.COMPLETION:
        return CompletionModelPromptTemplate(
            text=QUESTION_CLASSIFIER_COMPLETION_PROMPT.format(
                histories=memory_str,
                input_text=input_text,
                # ensure_ascii belongs to json.dumps, not str.format;
                # passing it to format() is silently ignored.
                categories=json.dumps(categories, ensure_ascii=False),
                classification_instructions=instruction,
            )
        )
    else:
        raise ValueError(f"Model mode {model_mode} not supported.")
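Since classification categories can contain non-English names, the ensure_ascii flag to json.dumps decides whether they reach the model as readable text or as \uXXXX escapes. A quick, self-contained illustration (the sample category name here is made up for the demo):

```python
import json

categories = [{"category_id": "1", "category_name": "天气"}]

# Default behaviour: non-ASCII characters are escaped to \uXXXX sequences.
escaped = json.dumps(categories)

# With ensure_ascii=False the original characters are preserved,
# which is generally easier for a model to interpret inside a prompt.
readable = json.dumps(categories, ensure_ascii=False)

print(escaped)   # category_name appears as \u5929\u6c14
print(readable)  # category_name appears as 天气
```

If the model only ever sees escaped category names, its classification output may not match the expected category strings.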
Additionally, the prompt template for Qwen 2.5 might be different, as indicated by the common_chat.json configuration in Dify. The common_chat.json file defines a structured prompt format that includes context and history prompts, which might not be compatible with the prompt structure expected by Qwen 2.5.
Here is the relevant part of the common_chat.json configuration:
{
    "human_prefix": "Human",
    "assistant_prefix": "Assistant",
    "context_prompt": "Use the following context as your learned knowledge, inside <context></context> XML tags.\n\n<context>\n{{#context#}}\n</context>\n\nWhen answer to user:\n- If you don't know, just say that you don't know.\n- If you don't know when you are not sure, ask for clarification.\nAvoid mentioning that you obtained the information from the context.\nAnd answer according to the language of the user's question.\n\n",
    "histories_prompt": "Here is the chat histories between human and assistant, inside <histories></histories> XML tags.\n\n<histories>\n{{#histories#}}\n</histories>\n\n",
    "system_prompt_orders": [
        "context_prompt",
        "pre_prompt",
        "histories_prompt"
    ],
    "query_prompt": "\n\nHuman: {{#query#}}\n\nAssistant: ",
    "stops": ["\nHuman:", "</histories>"]
}
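The {{#...#}} placeholders in this file are filled in at runtime by Dify's prompt transform. A rough sketch of that substitution, under the assumption of naive string replacement (the render function below is illustrative, not Dify's actual implementation):

```python
def render(template: str, values: dict[str, str]) -> str:
    # Naive placeholder substitution for {{#key#}} markers;
    # illustrative only, not Dify's real prompt transform.
    for key, value in values.items():
        template = template.replace("{{#" + key + "#}}", value)
    return template

# The query_prompt entry from common_chat.json above:
query_prompt = "\n\nHuman: {{#query#}}\n\nAssistant: "
print(render(query_prompt, {"query": "Why does Qwen 2.5 ignore my prompt?"}))
```

Note that the rendered result already hard-codes the Human/Assistant turn markers, which is the kind of structural assumption that may clash with what a given model family expects.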
The differences in prompt construction and handling between Qwen 2 and Qwen 2.5, as well as the specific requirements of the common_chat.json configuration, could be causing the issue [3][4].
To continue talking to Dosu, mention @dosu.
Qwen 2.5 doesn't support a system prompt, so you can try Chatflow instead.
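If a model really does ignore the system role, one common workaround is to fold the system text into the first user message. A minimal sketch under that assumption (this is not something Dify does out of the box; the message shape here is a generic role/text dict, not Dify's ChatModelMessage):

```python
def fold_system_prompt(messages: list[dict]) -> list[dict]:
    # Merge a leading system message into the first user message so the
    # instructions still reach a model that ignores the system role.
    # Mutates the message dicts in place; fine for a sketch.
    if not messages or messages[0].get("role") != "system":
        return messages
    system_text = messages[0]["text"]
    rest = messages[1:]
    for msg in rest:
        if msg.get("role") == "user":
            msg["text"] = system_text + "\n\n" + msg["text"]
            break
    return rest

msgs = [
    {"role": "system", "text": "Classify the question into one category."},
    {"role": "user", "text": "What is the weather today?"},
]
folded = fold_system_prompt(msgs)
print(folded[0]["text"])  # system text prepended to the user turn
```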
Self Checks
Dify version
0.8.3
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
Tested with workflows using Qwen 2.5; it didn't work with RAG or direct chat with a defined prompt. It worked well with Qwen 2.
✔️ Expected Behavior
No response
❌ Actual Behavior
No response