langgenius / dify

Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
https://dify.ai

There is an issue with the input variable options for the workflow chat assistant #10502

Closed · BryceWG closed this issue 1 day ago

BryceWG commented 2 days ago

Self Checks

Dify version

v0.11.0

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

My workflow project started returning errors during conversations on the preview URL page after I added an optional input variable (this happens with every field type); it behaves as if my chat message were being interpreted as the value of that input variable (screenshot). Strangely, everything works normally on the workflow edit page, and it also works fine when the input variable is required (screenshot). Variable settings (screenshot). A simple workflow DSL file example:

```yaml
app:
  description: ''
  icon: dizzy
  icon_background: '#E0F2FE'
  mode: advanced-chat
  name: Test
  use_icon_as_answer_icon: true
kind: app
version: 0.1.3
workflow:
  conversation_variables: []
  environment_variables: []
  features:
    file_upload:
      allowed_file_extensions:
      - .JPG
      - .JPEG
      - .PNG
      - .GIF
      - .WEBP
      - .SVG
      allowed_file_types:
      - image
      allowed_file_upload_methods:
      - local_file
      - remote_url
      enabled: true
      fileUploadConfig:
        audio_file_size_limit: 50
        batch_count_limit: 5
        file_size_limit: 30
        image_file_size_limit: 10
        video_file_size_limit: 100
        workflow_file_upload_limit: 10
      image:
        enabled: false
        number_limits: 3
        transfer_methods:
        - local_file
        - remote_url
      number_limits: 3
    opening_statement: ''
    retriever_resource:
      enabled: true
    sensitive_word_avoidance:
      enabled: false
    speech_to_text:
      enabled: false
    suggested_questions: []
    suggested_questions_after_answer:
      enabled: false
    text_to_speech:
      enabled: true
      language: ''
      voice: ''
  graph:
    edges:
    - data:
        isInIteration: false
        sourceType: llm
        targetType: answer
      id: llm-source-answer-target
      selected: false
      source: llm
      sourceHandle: source
      target: answer
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: start
        targetType: llm
      id: 1719845660642-source-llm-target
      source: '1719845660642'
      sourceHandle: source
      target: llm
      targetHandle: target
      type: custom
      zIndex: 0
    nodes:
    - data:
        desc: ''
        selected: false
        title: 开始
        type: start
        variables:
        - label: test
          max_length: 48
          options:
          - test1
          - test2
          required: false
          type: select
          variable: test
      height: 89
      id: '1719845660642'
      position:
        x: 490.366960250369
        y: 299.1536028400476
      positionAbsolute:
        x: 490.366960250369
        y: 299.1536028400476
      selected: true
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 243
    - data:
        context:
          enabled: false
          variable_selector: []
        desc: ''
        memory:
          query_prompt_template: '{{#sys.query#}}'
          role_prefix:
            assistant: ''
            user: ''
          window:
            enabled: false
            size: 10
        model:
          completion_params:
            enable_search: true
            temperature: 0.4
            web_search: true
          mode: chat
          name: Qwen/Qwen2-VL-72B-Instruct
          provider: openai_api_compatible
        prompt_template:
        - edition_type: basic
          id: 5771028a-cf93-4c83-9fff-1d039a6087b3
          role: system
          text: '{{#1719845660642.test#}}'
        selected: false
        title: LLM
        type: llm
        variables: []
        vision:
          configs:
            detail: high
          enabled: false
      height: 97
      id: llm
      position:
        x: 979.8776266615433
        y: 282
      positionAbsolute:
        x: 979.8776266615433
        y: 282
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 243
    - data:
        answer: '{{#llm.text#}}'
        desc: ''
        selected: false
        title: 直接回复
        type: answer
        variables: []
      height: 102
      id: answer
      position:
        x: 1318.3116402198348
        y: 282
      positionAbsolute:
        x: 1318.3116402198348
        y: 282
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 243
    viewport:
      x: -194.81164021983886
      y: 97.99999999999903
      zoom: 1.0000000000000029
```
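For context on where this symptom could come from: when a chat message is submitted from the preview URL page, the submitted form inputs are validated against the start node's variable configuration before the workflow runs. The sketch below is a hypothetical, simplified version of that kind of check (VariableConfig and validate_inputs are illustrative names, not Dify's actual code). If an optional select variable is left empty but the empty/optional branch is skipped, or the chat query is passed through as its value, the options check would reject the request, which would produce the behavior described above.

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class VariableConfig:
    """Illustrative model of one start-node input variable (not Dify's real class)."""
    variable: str
    type: str                              # e.g. "text-input", "select"
    required: bool = False
    options: list[str] = field(default_factory=list)
    max_length: Optional[int] = None

def validate_inputs(configs: list[VariableConfig], user_inputs: dict[str, Any]) -> dict[str, Any]:
    """Validate submitted form inputs against the variable configs."""
    cleaned: dict[str, Any] = {}
    for cfg in configs:
        value = user_inputs.get(cfg.variable)

        # Empty value: only an error when the variable is required.
        if value is None or value == "":
            if cfg.required:
                raise ValueError(f"{cfg.variable} value can not be empty")
            cleaned[cfg.variable] = None     # optional + empty -> skip further checks
            continue

        # A select value must be one of the configured options.
        if cfg.type == "select" and value not in cfg.options:
            raise ValueError(f"{cfg.variable} must be one of the following: {cfg.options}")

        # Enforce max_length on text values when configured.
        if cfg.max_length is not None and isinstance(value, str) and len(value) > cfg.max_length:
            raise ValueError(f"{cfg.variable} must be at most {cfg.max_length} characters")

        cleaned[cfg.variable] = value
    return cleaned
```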

✔️ Expected Behavior

Adding an optional input variable with options on the web page should still allow a normal conversation.

❌ Actual Behavior

Conversation on the web page is not possible; conversations only work in the workflow debugging interface, and no records appear in the conversation log.
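One way to narrow down whether this is a web app (frontend) problem or a backend validation problem would be to call the app's service API directly with the optional variable omitted from inputs. A minimal sketch, assuming the service API is enabled for the app and using placeholder values for the base URL and app API key:

```python
import requests

API_BASE = "http://localhost/v1"   # adjust to your self-hosted instance
API_KEY = "app-xxxxxxxx"           # placeholder app API key

response = requests.post(
    f"{API_BASE}/chat-messages",
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json={
        "inputs": {},               # the optional 'test' select variable is deliberately omitted
        "query": "Hello",
        "response_mode": "blocking",
        "user": "debug-user",
    },
    timeout=60,
)
print(response.status_code, response.text)
```

If this request succeeds while the preview URL page fails, the bug is more likely in how the web app builds the inputs payload; if it fails with a validation error about the test variable, the backend's handling of empty optional variables is the more likely culprit.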

dosubot[bot] commented 2 days ago

I found a similar open issue that might be related to the problem you're experiencing: Optional variables not configured correctly for chatbot. This issue is still open and was last updated on November 10, 2024. The suggested solution involves checking how the system handles empty optional variables. Here are some steps you might consider:

  1. Validation and Defaults: Ensure that the BasicVariablesConfigManager.validate_and_set_defaults method is correctly validating and setting defaults for your optional variables. This method is crucial for configuration validation and setting default values where necessary. A simplified sketch of this kind of defaults pass is shown after this list.

  2. Error Messages: Look for specific error messages or logs when these variables are left empty. Dify provides error messages such as "{{key}} value can not be empty" for empty variable values, which might help identify the issue.

  3. Configuration Settings: Verify if there are any specific configuration settings or flags affecting the behavior of optional variables when left empty. Review your configuration files for any anomalies.

  4. Default Values: Check the default values assigned to optional variables. Incorrect defaults might lead to unexpected behavior when variables are left empty.

These steps might help in resolving the issue with optional input variables in your workflow chat assistant. If the problem persists, further investigation into the code or configuration may be necessary.
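As a concrete illustration of step 1 above, a defaults-setting pass of the kind the bot describes might look roughly like the following. validate_and_set_defaults is the method name the bot mentions, but the body here is a simplified assumption for illustration, not Dify's actual implementation:

```python
def validate_and_set_defaults(variable_configs: list[dict]) -> list[dict]:
    """Simplified sketch: normalize start-node variable configs so optional
    variables get explicit defaults instead of being left ambiguous."""
    normalized = []
    for cfg in variable_configs:
        cfg = dict(cfg)                      # don't mutate the caller's config
        cfg.setdefault("required", False)    # optional unless explicitly required
        if cfg.get("type") == "select":
            cfg.setdefault("options", [])
            # An optional select left empty should resolve to a default (or None)
            # rather than being run through the options check.
            cfg.setdefault("default", None)
        elif cfg.get("type") in ("text-input", "paragraph"):
            cfg.setdefault("max_length", 48)
        normalized.append(cfg)
    return normalized
```

Comparing how the real method treats an optional select variable with no submitted value against a sketch like this could help confirm whether the empty/optional branch is being skipped.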

To continue talking to Dosu, mention @dosu.