langgenius / dify-docs

The open-source repo for docs.dify.ai

Parameter Extraction #209

Open · LLMuser opened 2 months ago

LLMuser commented 2 months ago

Hi all,

There is a manual for parameter extraction: https://github.com/langgenius/dify-docs/blob/main/en/guides/workflow/node/parameter-extractor.md

But I don't quite get the workflow. Can someone explain it, or could @AllenWriter even share the workflow?

Cheers!

dosubot[bot] commented 2 months ago

Hi @LLMuser, I'm here to help you with any questions or issues you have while waiting for a human maintainer. If you need assistance, feel free to ask!

The workflow for parameter extraction involves several key steps:

1. Definition

Use a Large Language Model (LLM) to infer and extract structured parameters from natural language for subsequent tool invocation or HTTP requests. The parameter extractor converts natural language into tool-recognizable parameters.

2. Scenarios

  1. Extracting key parameters required by tools from natural language:

    • Example: For an Arxiv paper retrieval application, the parameter extractor can extract the paper ID from a query and use it as a tool parameter for precise querying.
    • Example Query: "What is the content of this paper: 2405.10739"
    • Extracted Parameter: 2405.10739 (see the sketch after this list)
  2. Converting text to structured data:

    • Example: In a long-story generation application, the parameter extractor converts chapter content from plain text into an array so that the Iteration node can process it over multiple generation rounds.
  3. Extracting structured data and using it with the HTTP Request node:

    • Example: The extracted parameters can be used with the HTTP Request node to call any accessible URL, which is useful for fetching external search results, calling webhooks, generating images, etc.
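
To make scenarios 1 and 3 a bit more concrete, here is a minimal Python sketch of the idea (not how Dify implements it): a query goes in, a structured parameter comes out, and that parameter drives an HTTP request. The `extract_paper_id` helper is a hypothetical stand-in for the node, and the arXiv endpoint is only an example of a downstream tool.

```python
import re
import requests

def extract_paper_id(query: str) -> dict:
    """Hypothetical stand-in for the Parameter Extractor node.
    In Dify the extraction is done by the chosen LLM; a regex is used here
    only to keep the sketch self-contained and runnable."""
    match = re.search(r"\b\d{4}\.\d{4,5}\b", query)
    return {"paper_id": match.group(0) if match else None}

query = "What is the content of this paper: 2405.10739"
params = extract_paper_id(query)  # -> {"paper_id": "2405.10739"}

# The structured parameter can then feed a tool call or an HTTP Request node,
# e.g. a lookup against the public arXiv API:
resp = requests.get(
    "http://export.arxiv.org/api/query",
    params={"id_list": params["paper_id"]},
)
print(resp.status_code)
```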

3. How to Configure

Configuration Steps:

  1. Select the input variable: usually the variable that carries the content you want to extract parameters from.
  2. Choose the model: The parameter extractor relies on the LLM's inference and structured generation capabilities.
  3. Define the parameters to extract: These can be manually added or quickly imported from existing tools.
  4. Write instructions: providing examples helps the LLM extract complex parameters more effectively and consistently (a short sketch follows this list).
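
As a rough illustration of steps 3 and 4, a parameter definition and an instruction could look like the sketch below. The field names (`name`, `type`, `description`, `required`) are assumptions made for the example and may not match the node's exact configuration schema.

```python
# Illustrative only: the exact fields in the node's configuration may differ.
parameter_definitions = [
    {
        "name": "paper_id",
        "type": "string",
        "description": "The arXiv identifier mentioned in the user's question, e.g. 2405.10739",
        "required": True,
    },
]

# Instruction for the extractor; a worked example helps the LLM handle
# complex parameters more reliably.
instruction = (
    "Extract the arXiv paper ID from the user's question.\n"
    "Example: 'Summarise 2405.10739 for me' -> paper_id = 2405.10739"
)
```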

Advanced Settings: Inference Mode, Memory, and Output Variables (see the documentation page linked above for details on each).

Example Workflow: Long Article Iteration Generator

  1. Start Node: Enter the story title and outline.
  2. Generate Subtitles and Outlines Node: Convert the story title and outline into complete text.
  3. Extract Subtitles and Outlines Node: Convert the story text into an array structure. The parameter to extract is sections, and the parameter type is Array[Object].
  4. Iteration Node: Use the array-formatted story outline as the input and process it within the iteration node using an LLM Node.
  5. Direct Answer Node: Achieve streaming output after each iteration.

Detailed Configuration Steps:

  1. Configure the story title (title) and outline (outline) in the Start Node.
  2. Use a Generate Subtitles and Outlines Node to convert the story title and outline into complete text.
  3. Use an Extract Subtitles and Outlines Node to convert the story text into an array structure (Array[Object]); see the sketch after these steps.
  4. Use the array-formatted story outline as the input for the iteration node and process it within the iteration node using an LLM Node.
  5. Configure a Direct Reply Node inside the iteration node to achieve streaming output after each iteration.
  6. Complete debugging and preview.
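
As a rough sketch of step 3, the extracted `sections` variable (type Array[Object]) might look like the list below; the field names and contents are assumptions made for illustration, not necessarily the exact structure Dify produces.

```python
# Assumed shape of the extracted `sections` parameter (Array[Object]);
# the real field names depend on how the parameter is described in the node.
sections = [
    {"subtitle": "Chapter 1: The Departure", "outline": "The hero leaves the village..."},
    {"subtitle": "Chapter 2: The Storm", "outline": "A storm at sea forces a detour..."},
]

# The Iteration node then runs its inner LLM node once per element, and the
# Direct Reply node streams each chapter as it is generated.
for section in sections:
    prompt = f"Write the chapter '{section['subtitle']}' from this outline: {section['outline']}"
    print(prompt)  # placeholder for the LLM call inside the iteration
```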

This example demonstrates how parameter extraction can be used to convert text into structured data, which is then processed iteratively to generate content for each chapter of a long story [1][2][3].

To continue talking to Dosu, mention @dosu.

AllenWriter commented 2 months ago

Yes, how can I help you? To put it more intuitively: the Parameter Extractor node converts natural language into structured content, such as JSON, so that subsequent nodes can process it.
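
For instance, with the arXiv question from earlier in this thread, "natural language in, JSON out" looks roughly like this (the `paper_id` key is simply whatever parameter you define in the node):

```python
import json

# What the user typed:
user_input = "What is the content of this paper: 2405.10739"

# What the Parameter Extractor node hands to the next node
# (the keys are the parameters you defined; here just `paper_id`):
extracted = {"paper_id": "2405.10739"}

print(json.dumps(extracted, indent=2))
```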