langgenius / dify

Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
https://dify.ai

Chatflow Parameter Extraction node: when memory is enabled, no parameters can be extracted #6266

Closed BingGo-1 closed 1 week ago

BingGo-1 commented 1 month ago

Self Checks

Dify version

0.6.13

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

I am using Chatflow to build an application and want to extract some parameters for the next step. Since there are four parameters and a single user question may be missing some of them, I enabled memory on the Parameter Extraction node, expecting to collect all the parameters over a multi-turn conversation. Instead, the Parameter Extraction node produces no output at all, which I believe is a bug. I have tried many times: whenever memory is enabled, the node cannot extract any of the defined parameters, and when memory is disabled, it works fine.

✔️ Expected Behavior

I expect the Parameter Extraction node to extract parameters when its memory is enabled.

❌ Actual Behavior

Once memory is enabled, none of the parameters can be extracted. Extraction works fine with memory disabled, but in that case the node cannot collect all the parameters across a multi-turn conversation.

dosubot[bot] commented 1 month ago

Hey there, @BingGo-1! 👋 I'm here to help you with bugs, questions, and even how to become a contributor. Let's squash those bugs together!

To address the issue where the Parameter Extraction node in Dify version 0.6.13 fails to extract parameters when memory is enabled, ensure that the memory feature is properly configured. Here are the steps to follow:

  1. Select the input variable: Choose the variable input for parameter extraction.
  2. Choose the model: Select the model that the parameter extractor will use for inference and structured generation.
  3. Define the parameters to extract: Manually add the parameters or quickly import them from existing tools.
  4. Write instructions: Provide examples to help the LLM improve the effectiveness and stability of extracting complex parameters.
  5. Enable Memory: Ensure that the memory feature is enabled so that each input to the parameter extractor includes the chat history.
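Parameter extraction of this kind is typically implemented by giving the LLM a function-calling (tool) schema built from the parameters defined in step 3. The following is a minimal, hypothetical sketch of that mapping; the function name, parameter definitions, and helper are illustrative placeholders, not Dify's actual API:

```python
def build_extraction_schema(parameters):
    """Build a hypothetical function-calling schema from parameter definitions.

    Each entry mirrors what is configured in the Parameter Extraction node UI:
    a dict with 'name', 'type', and 'description' keys.
    """
    return {
        "name": "extract_parameters",  # illustrative tool name
        "description": "Extract structured parameters from the user's question.",
        "parameters": {
            "type": "object",
            "properties": {
                p["name"]: {"type": p["type"], "description": p["description"]}
                for p in parameters
            },
            "required": [p["name"] for p in parameters],
        },
    }

# Illustrative parameter definitions (placeholders, not from this issue).
params = [
    {"name": "city", "type": "string", "description": "Destination city"},
    {"name": "date", "type": "string", "description": "Travel date"},
]
schema = build_extraction_schema(params)
```

If the model returns a tool call matching this schema, the node's output variables are populated from its arguments; if it returns nothing matching, the node outputs nothing, which is the symptom described in this issue.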

Advanced Settings:

Memory: When memory is enabled, each input to the parameter extractor includes the chat history of the conversation, which helps the LLM understand context and improves comprehension during interactive dialogues. The memory is fetched using the _fetch_memory method, which retrieves the chat history and includes it in the prompt messages.
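As a rough illustration only (a simplified sketch, not Dify's actual _fetch_memory implementation), including memory amounts to prepending the stored turns to the prompt messages before the current question; the function names and message format here are assumptions:

```python
def fetch_memory(history, window=10):
    """Return the most recent conversation turns, oldest first.

    'history' is a list of (role, content) tuples; 'window' caps how many
    turns are included, mimicking a memory window setting.
    """
    return history[-window:]

def build_prompt(history, instruction, user_query):
    """Assemble prompt messages: system instruction, then chat history,
    then the current question as the final user message."""
    messages = [{"role": "system", "content": instruction}]
    for role, content in fetch_memory(history):
        messages.append({"role": role, "content": content})
    messages.append({"role": "user", "content": user_query})
    return messages
```

If the history is injected in a format the model does not expect, extraction can fail entirely even though the same prompt works without history, which would be consistent with the behavior reported above.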


By following these steps and ensuring that memory is enabled, the Parameter Extraction node should be able to extract parameters correctly in a multi-turn conversation. This configuration helps the LLM use the context from previous interactions, improving the accuracy of parameter extraction across turns [1][2][3].

To continue talking to Dosu, mention @dosu.