langgenius / dify

Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
https://dify.ai

Workflow does not output formatted text in Preview View #3574

Closed: nsvrana closed this issue 3 months ago

nsvrana commented 5 months ago

Self Checks

Dify version

0.6.3

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

Create a complex workflow that outputs text, then run it from the workspace or the "Preview" view. The executed steps are shown, but the "End" output is displayed as a raw JSON object. In the workflow logs, however, the same output appears as formatted text. Additionally, the workflow's loading spinner never stops.

✔️ Expected Behavior

The End output should be rendered as formatted Markdown text.

❌ Actual Behavior

The workflow remains in an unfinished (loading) state and the output is not rendered.

[Screenshot attached]

dosubot[bot] commented 5 months ago

Hello @nsvrana! 👋 I'm Dosu, a bot designed to help you solve bugs, answer your questions, and guide you on contributing to the Dify project. While we wait for a human maintainer, I'm here to assist you. I'm taking a glance at the issue you've reported with the Preview View in Dify v0.6.3 and will provide you with a full answer shortly. Let's get this sorted out together!

crazywoola commented 5 months ago

Can you attach the workflow.yml so we can test it out, if you don't mind?

nsvrana commented 5 months ago

I have stripped the prompts of any complexity, but it still produces the same result.

app:
  description: ''
  icon: "\U0001F916"
  icon_background: '#FFEAD5'
  mode: workflow
  name: Empty Test
workflow:
  features:
    file_upload:
      image:
        enabled: false
        number_limits: 3
        transfer_methods:
        - remote_url
        - local_file
    opening_statement: ''
    retriever_resource:
      enabled: false
    sensitive_word_avoidance:
      configs: []
      enabled: false
      type: ''
    speech_to_text:
      enabled: false
    suggested_questions: []
    suggested_questions_after_answer:
      enabled: false
    text_to_speech:
      enabled: false
      language: ''
      voice: ''
  graph:
    edges:
    - data:
        sourceType: if-else
        targetType: end
      id: 1712592115351-end
      selected: false
      source: '1712592115351'
      sourceHandle: 'true'
      target: end
      targetHandle: target
      type: custom
    - data:
        sourceType: if-else
        targetType: llm
      id: 1712592115351-1712592181613
      selected: false
      source: '1712592115351'
      sourceHandle: 'false'
      target: '1712592181613'
      targetHandle: target
      type: custom
    - data:
        sourceType: llm
        targetType: if-else
      id: 1712591674103-1712592115351
      selected: false
      source: '1712591674103'
      sourceHandle: source
      target: '1712592115351'
      targetHandle: target
      type: custom
    - data:
        sourceType: start
        targetType: llm
      id: start-1712589019429
      source: start
      sourceHandle: source
      target: '1712589019429'
      targetHandle: target
      type: custom
    - data:
        sourceType: llm
        targetType: llm
      id: 1712589019429-llm
      source: '1712589019429'
      sourceHandle: source
      target: llm
      targetHandle: target
      type: custom
    - data:
        sourceType: llm
        targetType: end
      id: 1712592181613-1713285594878
      selected: false
      source: '1712592181613'
      sourceHandle: source
      target: '1713285594878'
      targetHandle: target
      type: custom
    - data:
        sourceType: llm
        targetType: llm
      id: llm-1712590128698
      source: llm
      sourceHandle: source
      target: '1712590128698'
      targetHandle: target
      type: custom
    - data:
        sourceType: llm
        targetType: llm
      id: 1712590128698-1712591674103
      source: '1712590128698'
      sourceHandle: source
      target: '1712591674103'
      targetHandle: target
      type: custom
    nodes:
    - data:
        selected: true
        title: START
        type: start
        variables:
        - default: ''
          description: null
          hint: null
          label: Who do you want to read this?
          max_length: 48
          options: null
          required: true
          type: text-input
          variable: audience
        - default: ''
          description: null
          hint: null
          label: What would you like the reader to do?
          max_length: 48
          options: null
          required: true
          type: text-input
          variable: desired_outcome
        - default: ''
          description: null
          hint: null
          label: How will the content be shared?
          max_length: null
          options:
          - LinkedIn
          - Twitter/X
          - Homebase
          - Email
          - Deck Slides
          - One Pager
          - Internal Newsletter
          required: true
          type: select
          variable: target_format
        - default: ''
          description: null
          hint: null
          label: What information have you already written?
          max_length: null
          options: null
          required: true
          type: paragraph
          variable: existing_content
      height: 168
      id: start
      position:
        x: 80
        y: 282
      positionAbsolute:
        x: 80
        y: 282
      selected: true
      type: custom
      width: 244
    - data:
        context:
          enabled: false
          variable_selector: null
        memory: null
        model:
          completion_params:
            frequency_penalty: 0.5
            max_tokens: 2048
            presence_penalty: 0.5
            stop: []
            temperature: 0.2
            top_p: 0.75
          mode: chat
          name: gpt-4
          provider: azure_openai
        prompt_template:
        - role: user
          text: '=== Input ===

            Existing Content: {{#start.existing_content#}}

            Audience: {{#start.audience#}}

            Desired Outcome: {{#start.desired_outcome#}}

            === Input ===

            Breakdown the content into the key ideas.

            '
        selected: false
        title: Content Breakdown
        type: llm
        vision:
          configs: null
          enabled: false
          variable_selector: null
      height: 98
      id: llm
      position:
        x: 778.1719349145415
        y: 282
      positionAbsolute:
        x: 778.1719349145415
        y: 282
      selected: false
      type: custom
      width: 244
    - data:
        outputs:
        - value_selector:
          - '1712590128698'
          - text
          variable: text
        selected: false
        title: End
        type: end
      height: 90
      id: end
      position:
        x: 1969.952986908449
        y: 146.40342419966382
      positionAbsolute:
        x: 1969.952986908449
        y: 146.40342419966382
      selected: false
      type: custom
      width: 244
    - data:
        context:
          enabled: false
          variable_selector: []
        desc: ''
        model:
          completion_params:
            frequency_penalty: 0.3
            max_tokens: 512
            presence_penalty: 0.2
            temperature: 0.5
            top_p: 0.85
          mode: chat
          name: gpt-4
          provider: azure_openai
        prompt_template:
        - role: system
          text: '=== Input ===

            Audience: {{#start.audience#}}

            Desired Outcome: {{#start.desired_outcome#}}

            Target Format: {{#start.target_format#}}

            Content Summary: {{#start.existing_content#}}

            === Input ===

            Write an audience brief.'
        selected: false
        title: Audience Evaluation
        type: llm
        variables: []
        vision:
          enabled: false
      height: 98
      id: '1712589019429'
      position:
        x: 489.19861296555086
        y: 282
      positionAbsolute:
        x: 489.19861296555086
        y: 282
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        context:
          enabled: false
          variable_selector: []
        desc: ''
        model:
          completion_params:
            frequency_penalty: 0.1
            max_tokens: 4096
            presence_penalty: 0.1
            temperature: 0.8
            top_p: 0.9
          mode: chat
          name: gpt-4
          provider: azure_openai
        prompt_template:
        - role: system
          text: '=== Brief ===

            Audience Brief: {{#1712589019429.text#}}

            Content Brief: {{#llm.text#}}

            Desired Outcome: {{#start.desired_outcome#}}

            Target Format: {{#start.target_format#}}

            === Brief ===

            Please write the content based on the brief you have been given and the
            brand guide.'
        selected: false
        title: Regular Writer
        type: llm
        variables: []
        vision:
          enabled: false
      height: 98
      id: '1712590128698'
      position:
        x: 1065.6627273915026
        y: 282
      positionAbsolute:
        x: 1065.6627273915026
        y: 282
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        context:
          enabled: false
          variable_selector: []
        desc: ''
        model:
          completion_params:
            frequency_penalty: 0.3
            max_tokens: 1024
            presence_penalty: 0.2
            temperature: 0.5
            top_p: 0.85
          mode: chat
          name: gpt-4
          provider: azure_openai
        prompt_template:
        - role: system
          text: 'Analyze the original input and newly written content and determine
            if the original meaning is maintained and if the content fulfills the
            audience brief. If the content fits the brief respond with "Content complete."

            === Original Content ===

            {{#start.existing_content#}}

            === Original Content ===

            === Audience Brief ===

            {{#1712589019429.text#}}

            === Audience Brief ===

            === New Content ===

            {{#1712590128698.text#}}

            === New Content ==='
        selected: false
        title: Evaluate
        type: llm
        variables: []
        vision:
          enabled: false
      height: 98
      id: '1712591674103'
      position:
        x: 1360.889268333141
        y: 282
      positionAbsolute:
        x: 1360.889268333141
        y: 282
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        conditions:
        - comparison_operator: contains
          id: '1712592116696'
          value: Content complete
          variable_selector:
          - '1712591674103'
          - text
        desc: ''
        logical_operator: and
        selected: false
        title: IF/ELSE
        type: if-else
      height: 126
      id: '1712592115351'
      position:
        x: 1669.072006005269
        y: 282
      positionAbsolute:
        x: 1669.072006005269
        y: 282
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        context:
          enabled: false
          variable_selector: []
        desc: ''
        model:
          completion_params:
            frequency_penalty: 0.3
            max_tokens: 4096
            presence_penalty: 0.2
            temperature: 0.5
            top_p: 0.85
          mode: chat
          name: gpt-4
          provider: azure_openai
        prompt_template:
        - role: system
          text: '=== Content ===

            {{#1712590128698.text#}}

            === Content ===

            === Feedback ===

            {{#1712591674103.text#}}

            === Feedback ===

            Please update the content based on the feedback you have received.'
        selected: false
        title: Rewrite
        type: llm
        variables: []
        vision:
          enabled: false
      height: 98
      id: '1712592181613'
      position:
        x: 1969.952986908449
        y: 362.84436451362143
      positionAbsolute:
        x: 1969.952986908449
        y: 362.84436451362143
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        desc: ''
        outputs:
        - value_selector:
          - '1712592181613'
          - text
          variable: text
        selected: false
        title: Rewritten End
        type: end
      height: 90
      id: '1713285594878'
      position:
        x: 2291.626855495137
        y: 362.84436451362143
      positionAbsolute:
        x: 2291.626855495137
        y: 362.84436451362143
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    viewport:
      x: -506.6831142273469
      y: 266.6694411671007
      zoom: 0.5

dosubot[bot] commented 5 months ago

It looks like the issue with the Preview View not displaying formatted markdown text, and the workflow appearing stuck in an unfinished state, could be related to how markdown rendering and loading states are managed in Dify. The Preview View converts markdown text into a formatted display through dedicated markdown components, while the loading animation is driven by a separate loading-state component.

To address the issue, you should investigate the following areas:

  1. Markdown Rendering Components: Check the components responsible for markdown rendering, such as ReactMarkdown and its associated plugins. These components convert markdown text into the formatted display you expect in the Preview View; a rough sketch follows this list.

  2. Loading State Management: Examine the Loading component to ensure it correctly transitions from a loading state to a loaded state once the workflow execution completes. A perpetual loading state might indicate an issue with how the loading state is managed or an error that prevents the workflow from completing.

  3. Workflow Configuration: Since the workflow's configuration directly affects its execution, ensure that the nodes and edges are correctly set up to produce the expected output. Incorrect configurations could lead to execution issues, including the ones you're experiencing.

  4. Error Handling and Logs: Look for any errors or exceptions that might occur during the workflow execution. Errors could prevent the workflow from completing successfully and might explain why the Preview View remains in a loading state.
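
As a rough illustration of points 1 and 2, here is a minimal TypeScript/React sketch (not Dify's actual code; the component and prop names are assumptions) showing how a preview panel could render the End node's text output with the react-markdown library and clear the spinner once the run has finished:

// Minimal sketch, not Dify's actual implementation. Component and prop names
// are illustrative assumptions; react-markdown is the real library mentioned above.
import React from 'react'
import ReactMarkdown from 'react-markdown'

type RunStatus = 'running' | 'succeeded' | 'failed'

interface PreviewResultProps {
  status: RunStatus
  // The End node's mapped outputs, e.g. { text: '...markdown...' }
  outputs?: Record<string, string>
}

export function PreviewResult({ status, outputs }: PreviewResultProps) {
  // Show the spinner only while the run is still in progress. If this state
  // is never cleared, the spinner keeps spinning, which matches the symptom
  // reported in this issue.
  if (status === 'running')
    return <div className="spinner">Running...</div>

  if (status === 'failed')
    return <div className="error">Workflow run failed</div>

  // Render the 'text' output variable as Markdown rather than dumping the
  // raw JSON object produced by the End node.
  return <ReactMarkdown>{outputs?.text ?? ''}</ReactMarkdown>
}

In a setup like this, if the run status never transitions out of 'running' (for example, because the final node's completion event is lost), the raw output and the endless spinner would appear together, so the status updates reaching the Preview View are worth checking.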

If the issue persists after checking these areas, more specific details, such as error messages from the console or logs and steps that consistently reproduce the problem, would help diagnose and resolve it.


About Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.