BingGo-1 opened this issue 2 hours ago
I found some similar discussions and issues that might be relevant to your problem.
To ensure that the Chatflow parameter extraction node can extract multiple variables across multiple rounds of conversation and combine them from the conversation history, you need to enable the memory feature. This feature allows the LLM to include the chat history in each input, helping it understand the context and improve question comprehension during interactive dialogues.
Here is how you can configure it:
The memory feature works by including the chat history within specific tags, as shown in the following prompt structure:

```
### Memory
Here is the chat history between the human and assistant, provided within <histories> tags:
<histories>
{histories}
</histories>
```
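As a rough illustration, here is a minimal sketch of how the `{histories}` placeholder could be filled in, assuming the chat history is a simple list of (role, text) turns; `render_histories` and the turn format are assumptions for illustration, not Dify's actual prompt-assembly code:

```python
# Hypothetical sketch: render a list of (role, text) chat turns into the
# <histories> block referenced by the prompt above. render_histories and
# the turn format are illustrative assumptions, not Dify's real API.
def render_histories(turns: list[tuple[str, str]]) -> str:
    lines = [f"{role}: {text}" for role, text in turns]
    return "<histories>\n" + "\n".join(lines) + "\n</histories>"

print(render_histories([
    ("human", "the data of Coca Cola"),
    ("assistant", "Which year do you need the data for?"),
    ("human", "2024"),
]))
```

With memory enabled, both the earlier "Coca Cola" turn and the later "2024" turn are visible to the extractor in a single prompt.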
The assistant then extracts and combines parameters from the content inside the `<histories>` tags, using the `extract_parameter` function to create structured outputs with appropriate parameters. Additionally, you can configure the parameter extractor node with specific settings to handle memory and reasoning modes, as demonstrated in the integration tests. For example, you can set the `reasoning_mode` to `"function_call"` or `"prompt"` and configure the memory window size to control how much historical data is considered:
```python
# init_parameter_extractor_node is a helper defined in Dify's integration
# tests for the parameter extractor node.
node = init_parameter_extractor_node(
    config={
        "id": "llm",
        "data": {
            "title": "123",
            "type": "parameter-extractor",
            "model": {"provider": "anthropic", "name": "claude-2", "mode": "chat", "completion_params": {}},
            "query": ["sys", "query"],
            "parameters": [{"name": "location", "type": "string", "description": "location", "required": True}],
            "reasoning_mode": "prompt",
            "instruction": "",
            # Enable memory with a window of up to 50 past messages.
            "memory": {"window": {"enabled": True, "size": 50}},
        },
    },
)
```
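Here, `"memory": {"window": {"enabled": True, "size": 50}}` caps how many previous messages are pulled into the prompt. A larger window gives the extractor more history to combine parameters from, at the cost of a longer prompt, so size it to cover at least as many turns as your dialogue needs to gather all required parameters.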
By following these steps and utilizing the memory feature, the parameter extraction node will be able to extract and combine all required parameters across multiple rounds of conversation [1][2][3].
Note that the memory feature is only used in CHAT model mode with the `prompt` reasoning mode. If the reasoning mode is set to `function_call`, the memory feature is not used [4][3][1]. These constraints ensure that the memory feature operates within the defined limits and is compatible with the supported model modes and reasoning modes.
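A minimal sketch of that gating rule, assuming it can be expressed as a single boolean check (the function and parameter names below are illustrative, not Dify's internals):

```python
# Illustrative sketch of the constraint above: memory is injected only for
# chat-mode models whose reasoning mode is "prompt". Names are assumptions.
def should_include_memory(model_mode: str, reasoning_mode: str, memory_enabled: bool) -> bool:
    return memory_enabled and model_mode == "chat" and reasoning_mode == "prompt"

assert should_include_memory("chat", "prompt", True)
assert not should_include_memory("chat", "function_call", True)  # memory skipped
```

This would also explain the Function Calling behavior reported in the issue below: with that reasoning mode, the chat history is never injected, so parameters from earlier turns cannot be recovered.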
To continue talking to Dosu, mention @dosu.
### Dify version

0.7.3

### Cloud or Self Hosted

Self Hosted (Docker)

### Steps to reproduce
How can the Chatflow parameter extraction node extract multiple variables across multiple rounds of conversation? Even with memory turned on, it only extracts the parameters from the most recent question in each round and cannot combine them with the historical conversation. I did a simple test, as follows:
I created a simple chatflow, like this:
The parameter extraction node was configured like this:
This node should extract three parameters.
When I asked "the data in 2024 of Coca Cola", the node extracted the parameters normally, like this:
Then I asked "the data of Coca Cola", and the node extracted the keyword parameter, like this:
Because the date was missing, the answer node asked for the date, and I replied "2024".
Ideally, the parameter extraction node would use both rounds of the conversation and extract all three parameters, but it did not. This time it returned only the date "2024", leaving the previous keyword "Coca Cola" behind, like this:
So the problem is: when the user does not provide all the required parameters at once, the dialogue spans multiple rounds. How can the parameter extraction node extract all the required parameters across those rounds so that the subsequent nodes can run normally?
I also tried changing the extraction node's Reasoning Mode to "Function Calling", but then no parameters were extracted at all. I still don't understand this behavior. I was using the qwen-max model.
### ✔️ Expected Behavior

The Chatflow parameter extraction node extracts multiple variables across multiple rounds of conversation, combining values from the chat history.
### ❌ Actual Behavior

Only the parameters from the most recent query were extracted.