langgenius / dify

Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
https://dify.ai

LLM error occurred on Azure GPT-4o #5790

Open k3ntar0 opened 1 week ago

k3ntar0 commented 1 week ago

Self Checks

Dify version

0.6.12-fix1 (this issue does not occur in version 0.6.11 or earlier in my environment)

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

I have implemented a workflow that uses Azure OpenAI GPT-4o to generate responses. Streaming starts without any issues, but partway through, generation is interrupted with the following error and processing halts at the LLM node:

[azure_openai] Error: 'NoneType' object has no attribute 'tool_calls'
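For context, a minimal sketch of how this kind of AttributeError arises (the classes below are illustrative, not Dify's actual ones): a streaming chunk whose message is None, combined with unguarded attribute access, produces exactly this message.

```python
# Illustrative only: Azure's streaming API can return a chunk whose
# message/delta is None, and unguarded attribute access then fails.
class Chunk:
    def __init__(self, message=None):
        self.message = message  # may legitimately be None mid-stream

def read_tool_calls(chunk):
    # unguarded access -- raises when chunk.message is None
    return chunk.message.tool_calls

try:
    read_tool_calls(Chunk(message=None))
except AttributeError as e:
    print(f"[azure_openai] Error: {e}")
```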

✔️ Expected Behavior

The generation by the LLM should complete and proceed to the next node in the workflow.

❌ Actual Behavior

The generation by the LLM is interrupted, and the workflow stops.

dosubot[bot] commented 1 week ago

Hello @k3ntar0! I'm here to help you with any bugs, questions, or contributions you may have. Let's work together to solve the issue you're experiencing.

To resolve the interruption in your self-hosted Dify 0.6.12-fix1 setup, where text generation with Azure OpenAI GPT-4o fails with the error 'NoneType' object has no attribute 'tool_calls', you need to ensure that the tool_calls attribute is properly initialized and handled in the _handle_generate_response and _handle_generate_stream_response methods.

The error suggests that the tool_calls attribute is being accessed on a NoneType object, which means it is not being set correctly in some cases. Here is a potential fix:

  1. Ensure that tool_calls is initialized as an empty list if it is not provided.
  2. Update the _handle_generate_response and _handle_generate_stream_response methods to handle cases where tool_calls might be None.
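The two steps above can be sketched as a small guard (the helper name here is mine, not Dify's): treat a missing or explicitly-None tool_calls as an empty list before any extraction, since `message.get('tool_calls', [])` still returns None when the key is present with a None value.

```python
# Hedged sketch: normalize tool_calls to a list before extraction.
def safe_tool_calls(message):
    """Return message['tool_calls'] as a list, or [] when absent/None."""
    if not message:
        return []
    # `or []` also covers the case where the key exists with value None
    return message.get('tool_calls') or []

print(safe_tool_calls(None))                           # []
print(safe_tool_calls({'tool_calls': None}))           # []
print(safe_tool_calls({'tool_calls': [{'id': 'a'}]}))  # [{'id': 'a'}]
```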

Here is the updated code:

def _handle_generate_response(self, model: str, credentials: dict, response: requests.Response,
                              prompt_messages: list[PromptMessage]) -> LLMResult:

    response_json = response.json()
    completion_type = LLMMode.value_of(credentials['mode'])
    output = response_json['choices'][0]

    response_content = ''
    tool_calls = None
    function_calling_type = credentials.get('function_calling_type', 'no_call')
    if completion_type is LLMMode.CHAT:
        response_content = output.get('message', {}).get('content', '')
        if function_calling_type == 'tool_call':
            tool_calls = output.get('message', {}).get('tool_calls', [])
        elif function_calling_type == 'function_call':
            tool_calls = output.get('message', {}).get('function_call', [])

    elif completion_type is LLMMode.COMPLETION:
        response_content = output.get('text', '')

    assistant_message = AssistantPromptMessage(content=response_content, tool_calls=[])

    if tool_calls:
        if function_calling_type == 'tool_call':
            assistant_message.tool_calls = self._extract_response_tool_calls(tool_calls)
        elif function_calling_type == 'function_call':
            assistant_message.tool_calls = [self._extract_response_function_call(tool_calls)]

    usage = response_json.get("usage")
    if usage:
        # transform usage
        prompt_tokens = usage["prompt_tokens"]
        completion_tokens = usage["completion_tokens"]
    else:
        # calculate num tokens
        prompt_tokens = self._num_tokens_from_string(model, prompt_messages[0].content)
        completion_tokens = self._num_tokens_from_string(model, assistant_message.content)

    # transform usage
    usage = self._calc_response_usage(model, credentials, prompt_tokens, completion_tokens)

    # transform response
    result = LLMResult(
        model=response_json["model"],
        prompt_messages=prompt_messages,
        message=assistant_message,
        usage=usage,
    )

    return result

def _handle_generate_stream_response(self, model: str, credentials: dict, response: requests.Response,
                                     prompt_messages: list[PromptMessage]) -> Generator:
    full_assistant_content = ''
    chunk_index = 0

    def create_final_llm_result_chunk(index: int, message: AssistantPromptMessage, finish_reason: str) \
            -> LLMResultChunk:
        # calculate num tokens
        prompt_tokens = self._num_tokens_from_string(model, prompt_messages[0].content)
        completion_tokens = self._num_tokens_from_string(model, full_assistant_content)

        # transform usage
        usage = self._calc_response_usage(model, credentials, prompt_tokens, completion_tokens)

        return LLMResultChunk(
            model=model,
            prompt_messages=prompt_messages,
            delta=LLMResultChunkDelta(
                index=index,
                message=message,
                finish_reason=finish_reason,
                usage=usage
            )
        )

    # delimiter for stream response, need unicode_escape
    import codecs
    delimiter = credentials.get("stream_mode_delimiter", "\n\n")
    delimiter = codecs.decode(delimiter, "unicode_escape")

    tools_calls: list[AssistantPromptMessage.ToolCall] = []

    def increase_tool_call(new_tool_calls: list[AssistantPromptMessage.ToolCall]):
        def get_tool_call(tool_call_id: str):
            if not tool_call_id:
                return tools_calls[-1]
            tool_call = next((tool_call for tool_call in tools_calls if tool_call.id == tool_call_id), None)
            if tool_call is None:
                tool_call = AssistantPromptMessage.ToolCall(
                    id=tool_call_id,
                    type="function",
                    function=AssistantPromptMessage.ToolCall.ToolCallFunction(
                        name="",
                        arguments=""
                    )
                )
                tools_calls.append(tool_call)

            return tool_call

        for new_tool_call in new_tool_calls:
            # get tool call
            tool_call = get_tool_call(new_tool_call.function.name)
            # update tool call
            if new_tool_call.id:
                tool_call.id = new_tool_call.id
            if new_tool_call.type:
                tool_call.type = new_tool_call.type
            if new_tool_call.function.name:
                tool_call.function.name = new_tool_call.function.name
            if new_tool_call.function.arguments:
                tool_call.function.arguments += new_tool_call.function.arguments

    finish_reason = 'Unknown'

    for chunk in response.iter_lines(decode_unicode=True, delimiter=delimiter):
        chunk = chunk.strip()
        if chunk:
            # ignore sse comments
            if chunk.startswith(':'):
                continue
            # strip the SSE "data:" prefix; lstrip('data: ') would strip any of
            # those characters from the left, not the literal prefix
            decoded_chunk = chunk.removeprefix('data:').lstrip()
            try:
                chunk_json = json.loads(decoded_chunk)
            # stream ended
            except json.JSONDecodeError as e:
                yield create_final_llm_result_chunk(
                    index=chunk_index + 1,
                    message=AssistantPromptMessage(content=""),
                    finish_reason="Non-JSON encountered."
                )
                break
            if not chunk_json or not chunk_json.get('choices'):
                continue

            choice = chunk_json['choices'][0]
            finish_reason = chunk_json['choices'][0].get('finish_reason')
            chunk_index += 1

            if 'delta' in choice:
                delta = choice['delta']
                delta_content = delta.get('content')

                assistant_message_tool_calls = None

                if 'tool_calls' in delta and credentials.get('function_calling_type', 'no_call') == 'tool_call':
                    assistant_message_tool_calls = delta.get('tool_calls', None)
                elif 'function_call' in delta and credentials.get('function_calling_type', 'no_call') == 'function_call':
                    assistant_message_tool_calls = [{
                        'id': 'tool_call_id',
                        'type': 'function',
                        'function': delta.get('function_call', {})
                    }]

                # extract tool calls from response
                if assistant_message_tool_calls:
                    tool_calls = self._extract_response_tool_calls(assistant_message_tool_calls)
                    increase_tool_call(tool_calls)

                if delta_content is None or delta_content == '':
                    continue
                # transform assistant message to prompt message
                assistant_prompt_message = AssistantPromptMessage(
                    content=delta_content,
                )

                # reset tool calls
                tool_calls = []
                full_assistant_content += delta_content
            elif 'text' in choice:
                choice_text = choice.get('text', '')
                if choice_text == '':
                    continue

                # transform assistant message to prompt message
                assistant_prompt_message = AssistantPromptMessage(content=choice_text)
                full_assistant_content += choice_text
            else:
                continue

            yield LLMResultChunk(
                model=model,
                prompt_messages=prompt_messages,
                delta=LLMResultChunkDelta(
                    index=chunk_index,
                    message=assistant_prompt_message,
                )
            )

        chunk_index += 1

    if tools_calls:
        yield LLMResultChunk(
            model=model,
            prompt_messages=prompt_messages,
            delta=LLMResultChunkDelta(
                index=chunk_index,
                message=AssistantPromptMessage(
                    tool_calls=tools_calls,
                    content=""
                ),
            )
        )

    yield create_final_llm_result_chunk(
        index=chunk_index,
        message=AssistantPromptMessage(content=""),
        finish_reason=finish_reason
    )

By ensuring that tool_calls is initialized as an empty list when it is not provided, you avoid the 'NoneType' object has no attribute 'tool_calls' error. Additionally, the _extract_response_tool_calls and _extract_response_function_call methods should be used to extract tool calls from the response, so that the tool_calls attribute is always properly initialized [1][2][3].

To continue talking to Dosu, mention @dosu.

laipz8200 commented 1 week ago

Could you provide us with a minimal DSL that would cause this exception?

k3ntar0 commented 1 week ago

@laipz8200 Thank you for looking into the issue. Here is a minimal workflow:

app:
  description: ''
  icon: "\U0001F916"
  icon_background: '#FFEAD5'
  mode: workflow
  name: Buggy Workflow
workflow:
  features:
    file_upload:
      image:
        enabled: false
        number_limits: 3
        transfer_methods:
        - local_file
        - remote_url
    opening_statement: ''
    retriever_resource:
      enabled: true
    sensitive_word_avoidance:
      enabled: false
    speech_to_text:
      enabled: false
    suggested_questions: []
    suggested_questions_after_answer:
      enabled: false
    text_to_speech:
      enabled: false
      language: ''
      voice: ''
  graph:
    edges:
    - data:
        isInIteration: false
        sourceType: start
        targetType: llm
      id: 1719829040418-source-1719829045310-target
      source: '1719829040418'
      sourceHandle: source
      target: '1719829045310'
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: llm
        targetType: end
      id: 1719829045310-source-1719829102839-target
      source: '1719829045310'
      sourceHandle: source
      target: '1719829102839'
      targetHandle: target
      type: custom
      zIndex: 0
    nodes:
    - data:
        desc: ''
        selected: false
        title: Start
        type: start
        variables:
        - label: text
          max_length: 48
          options: []
          required: true
          type: text-input
          variable: text
      height: 90
      id: '1719829040418'
      position:
        x: 80
        y: 282
      positionAbsolute:
        x: 80
        y: 282
      selected: true
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        context:
          enabled: true
          variable_selector:
          - '1719829040418'
          - text
        desc: ''
        model:
          completion_params:
            temperature: 0.7
          mode: chat
          name: gpt-4o
          provider: azure_openai
        prompt_template:
        - id: 2676b1db-c839-44a9-b0eb-7589e741d8ad
          role: system
          text: '{{#context#}}'
        selected: false
        title: LLM
        type: llm
        variables: []
        vision:
          configs:
            detail: high
          enabled: true
      height: 98
      id: '1719829045310'
      position:
        x: 384
        y: 282
      positionAbsolute:
        x: 384
        y: 282
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        desc: ''
        outputs: []
        selected: false
        title: End
        type: end
      height: 54
      id: '1719829102839'
      position:
        x: 688
        y: 282
      positionAbsolute:
        x: 688
        y: 282
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    viewport:
      x: -63
      y: 51.5
      zoom: 1
(Screenshot attached: 2024-07-01 19:21:29)

This issue occurred not only on the remote server but also locally.

laipz8200 commented 1 week ago

Thank you for providing the DSL. It appears that this error is not occurring on my machine. Could you please share the API Version from your Azure configuration?

k3ntar0 commented 1 week ago

Sure.

| API Version | Base Model | Result |
| --- | --- | --- |
| 2024-05-01-preview | gpt-4o | |
| 2024-05-01-preview | gpt-4o-2024-05-13 | |
| 2024-02-01 | gpt-4o | |
| 2024-02-01 | gpt-4o-2024-05-13 | |

On the Azure Console, the model version is displayed as 2024-05-13.

laipz8200 commented 1 week ago

I'm still unable to reproduce this error under the same conditions on my machine. Let's wait for more test data from others.

laipz8200 commented 1 week ago

Hi @k3ntar0. Could you please provide the region where your Azure OpenAI service is hosted? This information is necessary for us to proceed. Thank you!

AlexSu1108 commented 1 week ago

I met the same problem, and I found that Azure OpenAI GPT-4 also reproduces this error.

Here is some info:

shelltea commented 1 week ago

Encountered the same issue: it did not exist in version 0.6.11, but appeared after the upgrade.

shelltea commented 6 days ago

@crazywoola @laipz8200 Hello, could you please fix this issue as soon as possible? All the models I use are GPT-4o on Azure, and this problem is making all my Agents unusable.

laipz8200 commented 6 days ago

Hi @shelltea! We are still tracking the issue. Is your model version also 2024-05-13?

k3ntar0 commented 6 days ago

@laipz8200 Sorry for the late reply; the region is East US. Thank you.

shelltea commented 6 days ago

> Hi @shelltea! We are still tracking the issue. Is your model version also 2024-05-13?

(screenshot attached)
real-zony commented 3 days ago

I have the same problem.

(screenshots attached)

Region: East US 2

shelltea commented 2 days ago

Upgrading to version 0.6.13 did not fix this issue, which is very frustrating. This problem makes all Agents unusable, and rolling back the version results in errors, so it cannot be rolled back.

real-zony commented 2 days ago

> Upgrading to version 0.6.13 still has this issue, it's so frustrating. This problem makes all Agents unusable. Rolling back the version results in errors and cannot be rolled back.

I'm not a Python expert, but this exception might be caused by the recent commit fix(api/model_runtime/azure/llm): Switch to tool_call.

My temporary solution is to modify the llm.py file, restoring the code version from b699945, and then run `docker build -t langgenius/dify-api:0.6.13 .` to build a new image. It's working fine now.

(screenshots attached)

laipz8200 commented 1 day ago

I've tested this function with the same region, API version, and model version on both my local machine and our cloud service, and I'm unable to reproduce the issue. Could someone please provide the full backend logs for further investigation?

shelltea commented 1 day ago

> I've tested this function with the same region, API version, and model version on both my local machine and our cloud service, and I'm unable to reproduce the issue. Could someone please provide the full backend logs for further investigation?

@laipz8200 Hello, how would you like me to provide the complete logs? With the default configuration, only the error below is returned, and no error logs appear in docker-api-1.

data: {"event": "error", "conversation_id": "183460d3-3cd2-4110-9a63-b73aaa1efec7", "message_id": "961ab6ee-b2af-4c50-bda7-cc9f50541a0e", "created_at": 1720592312, "code": "completion_request_error", "status": 400, "message": "[azure_openai] Error: 'NoneType' object has no attribute 'tool_calls'"}
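The error event above arrives as a server-sent-events `data:` line. A small sketch of pulling the JSON body out for inspection (the payload below is abbreviated from the log line above):

```python
import json

# SSE line as emitted by the Dify API (payload abbreviated)
sse_line = ('data: {"event": "error", "code": "completion_request_error", '
            '"status": 400, "message": "[azure_openai] Error: \'NoneType\' '
            'object has no attribute \'tool_calls\'"}')

# removeprefix strips the literal "data:" prefix; lstrip('data: ') would
# instead strip any of those characters and can eat payload bytes.
payload = json.loads(sse_line.removeprefix('data:').strip())
print(payload['code'], payload['status'])  # completion_request_error 400
```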
laipz8200 commented 1 day ago

Hi, @shelltea! Could you please provide the traceback log from the api container? Once you trigger this error, it should be printed in the log. Would you be able to find it for me?

Additionally, could you try using our cloud service with your key to see if the issue still exists?

shelltea commented 1 day ago

> Hi, @shelltea! Could you please provide the traceback log from the api container? Once you trigger this error, it should be printed in the log. Would you be able to find it for me?
>
> Additionally, could you try using our cloud service with your key to see if the issue still exists?

Hi, @laipz8200

2024-07-10 09:05:52,754.754 INFO [MainThread] [_internal.py:97] - WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5001
 * Running on http://172.26.0.8:5001
2024-07-10 09:05:52,754.754 INFO [MainThread] [_internal.py:97] - Press CTRL+C to quit
2024-07-10 09:05:52,760.760 INFO [MainThread] [_internal.py:97] -  * Restarting with stat
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
2024-07-10 09:06:15,766.766 WARNING [MainThread] [_internal.py:97] -  * Debugger is active!
2024-07-10 09:06:15,769.769 INFO [MainThread] [_internal.py:97] -  * Debugger PIN: 442-715-506
2024-07-10 09:08:04,106.106 INFO [Thread-2 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:04] "GET /console/api/account/profile HTTP/1.1" 200 -
2024-07-10 09:08:04,121.121 INFO [Thread-4 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:04] "GET /console/api/workspaces/current HTTP/1.1" 200 -
2024-07-10 09:08:04,143.143 DEBUG [Thread-5 (process_request_thread)] [connectionpool.py:1051] - Starting new HTTPS connection (1): updates.dify.ai:443
2024-07-10 09:08:04,282.282 INFO [Thread-3 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:04] "GET /console/api/apps?page=1&limit=30&name= HTTP/1.1" 200 -
2024-07-10 09:08:04,561.561 INFO [Thread-6 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:04] "GET /console/api/workspaces HTTP/1.1" 200 -
2024-07-10 09:08:04,566.566 DEBUG [Thread-8 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/llm/_position.yaml: file not found
2024-07-10 09:08:04,575.575 DEBUG [Thread-8 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/text_embedding/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/text_embedding/_position.yaml: file not found
2024-07-10 09:08:04,583.583 DEBUG [Thread-8 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/speech2text/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/speech2text/_position.yaml: file not found
2024-07-10 09:08:04,584.584 DEBUG [Thread-8 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/tts/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/tts/_position.yaml: file not found
2024-07-10 09:08:04,584.584 DEBUG [Thread-8 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/nvidia_nim/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/nvidia_nim/llm/_position.yaml: file not found
2024-07-10 09:08:04,590.590 INFO [Thread-10 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:04] "GET /console/api/datasets/retrieval-setting HTTP/1.1" 200 -
(… dozens of similar "Failed to load YAML file …/_position.yaml: file not found" DEBUG lines for the remaining model providers, interleaved with 200-status request logs, omitted; no traceback appears …)
2024-07-10 09:08:04,683.683 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/localai/text_embedding/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/localai/text_embedding/_position.yaml: file not found
2024-07-10 09:08:04,684.684 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/localai/rerank/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/localai/rerank/_position.yaml: file not found
2024-07-10 09:08:04,686.686 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/localai/speech2text/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/localai/speech2text/_position.yaml: file not found
2024-07-10 09:08:04,687.687 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/volcengine_maas/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/volcengine_maas/llm/_position.yaml: file not found
2024-07-10 09:08:04,687.687 DEBUG [Thread-8 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/volcengine_maas/text_embedding/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/volcengine_maas/text_embedding/_position.yaml: file not found
2024-07-10 09:08:04,688.688 DEBUG [Thread-8 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/openai_api_compatible/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/openai_api_compatible/llm/_position.yaml: file not found
2024-07-10 09:08:04,695.695 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/volcengine_maas/text_embedding/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/volcengine_maas/text_embedding/_position.yaml: file not found
2024-07-10 09:08:04,698.698 DEBUG [Thread-8 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/openai_api_compatible/text_embedding/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/openai_api_compatible/text_embedding/_position.yaml: file not found
2024-07-10 09:08:04,704.704 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/openai_api_compatible/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/openai_api_compatible/llm/_position.yaml: file not found
2024-07-10 09:08:04,716.716 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/openai_api_compatible/text_embedding/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/openai_api_compatible/text_embedding/_position.yaml: file not found
2024-07-10 09:08:04,815.815 INFO [Thread-7 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:04] "GET /console/api/apps?page=1&limit=30&name= HTTP/1.1" 200 -
2024-07-10 09:08:04,949.949 DEBUG [Thread-5 (process_request_thread)] [connectionpool.py:546] - https://updates.dify.ai:443 "GET /?current_version=0.6.13 HTTP/11" 200 None
2024-07-10 09:08:04,953.953 INFO [Thread-5 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:04] "GET /console/api/version?current_version=0.6.13 HTTP/1.1" 200 -
2024-07-10 09:08:05,462.462 INFO [Thread-8 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:05] "GET /console/api/workspaces/current/model-providers HTTP/1.1" 200 -
2024-07-10 09:08:05,481.481 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/llm/_position.yaml: file not found
2024-07-10 09:08:05,484.484 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/nvidia_nim/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/nvidia_nim/llm/_position.yaml: file not found
2024-07-10 09:08:05,485.485 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/togetherai/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/togetherai/llm/_position.yaml: file not found
2024-07-10 09:08:05,486.486 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/ollama/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/ollama/llm/_position.yaml: file not found
2024-07-10 09:08:05,487.487 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/replicate/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/replicate/llm/_position.yaml: file not found
2024-07-10 09:08:05,487.487 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/huggingface_hub/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/huggingface_hub/llm/_position.yaml: file not found
2024-07-10 09:08:05,488.488 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/xinference/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/xinference/llm/_position.yaml: file not found
2024-07-10 09:08:05,488.488 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/triton_inference_server/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/triton_inference_server/llm/_position.yaml: file not found
2024-07-10 09:08:05,490.490 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/openllm/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/openllm/llm/_position.yaml: file not found
2024-07-10 09:08:05,490.490 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/localai/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/localai/llm/_position.yaml: file not found
2024-07-10 09:08:05,490.490 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/volcengine_maas/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/volcengine_maas/llm/_position.yaml: file not found
2024-07-10 09:08:05,491.491 DEBUG [Thread-9 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/openai_api_compatible/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/openai_api_compatible/llm/_position.yaml: file not found
2024-07-10 09:08:05,505.505 INFO [Thread-9 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:05] "GET /console/api/workspaces/current/models/model-types/llm HTTP/1.1" 200 -
2024-07-10 09:08:06,471.471 INFO [Thread-14 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:06] "GET /console/api/workspaces/current/members HTTP/1.1" 200 -
2024-07-10 09:08:06,471.471 DEBUG [Thread-15 (process_request_thread)] [connectionpool.py:1051] - Starting new HTTPS connection (1): tmpl.dify.ai:443
2024-07-10 09:08:06,530.530 INFO [Thread-13 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:06] "GET /console/api/installed-apps HTTP/1.1" 200 -
2024-07-10 09:08:06,610.610 INFO [Thread-16 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:06] "GET /console/api/installed-apps HTTP/1.1" 200 -
2024-07-10 09:08:07,379.379 DEBUG [Thread-15 (process_request_thread)] [connectionpool.py:546] - https://tmpl.dify.ai:443 "GET /apps?language=zh-Hans HTTP/11" 200 None
2024-07-10 09:08:07,634.634 INFO [Thread-15 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:07] "GET /console/api/explore/apps HTTP/1.1" 200 -
2024-07-10 09:08:08,424.424 INFO [Thread-17 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:08] "GET /console/api/installed-apps/ba63b5c3-5baa-4e30-93ba-c901e37c7189/parameters HTTP/1.1" 200 -
2024-07-10 09:08:08,454.454 INFO [Thread-19 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:08] "GET /console/api/installed-apps/ba63b5c3-5baa-4e30-93ba-c901e37c7189/meta HTTP/1.1" 200 -
2024-07-10 09:08:08,460.460 INFO [Thread-18 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:08] "GET /console/api/installed-apps/ba63b5c3-5baa-4e30-93ba-c901e37c7189/conversations?limit=100&pinned=true HTTP/1.1" 200 -
2024-07-10 09:08:08,463.463 INFO [Thread-20 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:08] "GET /console/api/installed-apps/ba63b5c3-5baa-4e30-93ba-c901e37c7189/conversations?limit=100&pinned=false HTTP/1.1" 200 -
2024-07-10 09:08:09,736.736 INFO [Thread-21 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:09] "GET /console/api/installed-apps/ba63b5c3-5baa-4e30-93ba-c901e37c7189/messages?conversation_id=96e0661e-b10e-453e-aed0-681df29d360a&limit=20&last_id= HTTP/1.1" 200 -
2024-07-10 09:08:13,802.802 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/llm/_position.yaml: file not found
2024-07-10 09:08:13,803.803 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/text_embedding/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/text_embedding/_position.yaml: file not found
2024-07-10 09:08:13,803.803 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/speech2text/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/speech2text/_position.yaml: file not found
2024-07-10 09:08:13,804.804 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/tts/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/tts/_position.yaml: file not found
2024-07-10 09:08:13,805.805 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/nvidia_nim/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/nvidia_nim/llm/_position.yaml: file not found
2024-07-10 09:08:13,805.805 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/togetherai/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/togetherai/llm/_position.yaml: file not found
2024-07-10 09:08:13,805.805 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/ollama/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/ollama/llm/_position.yaml: file not found
2024-07-10 09:08:13,806.806 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/ollama/text_embedding/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/ollama/text_embedding/_position.yaml: file not found
2024-07-10 09:08:13,806.806 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/replicate/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/replicate/llm/_position.yaml: file not found
2024-07-10 09:08:13,807.807 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/replicate/text_embedding/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/replicate/text_embedding/_position.yaml: file not found
2024-07-10 09:08:13,807.807 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/huggingface_hub/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/huggingface_hub/llm/_position.yaml: file not found
2024-07-10 09:08:13,808.808 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/huggingface_hub/text_embedding/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/huggingface_hub/text_embedding/_position.yaml: file not found
2024-07-10 09:08:13,808.808 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/xinference/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/xinference/llm/_position.yaml: file not found
2024-07-10 09:08:13,809.809 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/xinference/text_embedding/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/xinference/text_embedding/_position.yaml: file not found
2024-07-10 09:08:13,809.809 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/xinference/rerank/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/xinference/rerank/_position.yaml: file not found
2024-07-10 09:08:13,809.809 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/xinference/speech2text/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/xinference/speech2text/_position.yaml: file not found
2024-07-10 09:08:13,810.810 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/triton_inference_server/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/triton_inference_server/llm/_position.yaml: file not found
2024-07-10 09:08:13,810.810 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/openllm/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/openllm/llm/_position.yaml: file not found
2024-07-10 09:08:13,811.811 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/openllm/text_embedding/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/openllm/text_embedding/_position.yaml: file not found
2024-07-10 09:08:13,811.811 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/localai/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/localai/llm/_position.yaml: file not found
2024-07-10 09:08:13,811.811 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/localai/text_embedding/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/localai/text_embedding/_position.yaml: file not found
2024-07-10 09:08:13,812.812 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/localai/rerank/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/localai/rerank/_position.yaml: file not found
2024-07-10 09:08:13,812.812 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/localai/speech2text/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/localai/speech2text/_position.yaml: file not found
2024-07-10 09:08:13,812.812 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/volcengine_maas/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/volcengine_maas/llm/_position.yaml: file not found
2024-07-10 09:08:13,812.812 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/volcengine_maas/text_embedding/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/volcengine_maas/text_embedding/_position.yaml: file not found
2024-07-10 09:08:13,813.813 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/openai_api_compatible/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/openai_api_compatible/llm/_position.yaml: file not found
2024-07-10 09:08:13,813.813 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/openai_api_compatible/text_embedding/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/openai_api_compatible/text_embedding/_position.yaml: file not found
2024-07-10 09:08:13,822.822 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/llm/_position.yaml: file not found
2024-07-10 09:08:13,824.824 DEBUG [Thread-22 (process_request_thread)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/llm/_position.yaml: file not found
2024-07-10 09:08:13,893.893 DEBUG [Thread-24 (_generate_worker)] [connectionpool.py:1051] - Starting new HTTPS connection (1): openaipublic.blob.core.windows.net:443
2024-07-10 09:08:15,131.131 DEBUG [Thread-24 (_generate_worker)] [connectionpool.py:546] - https://openaipublic.blob.core.windows.net:443 "GET /encodings/o200k_base.tiktoken HTTP/11" 200 3613922
2024-07-10 09:08:18,464.464 DEBUG [Thread-24 (_generate_worker)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/llm/_position.yaml: file not found
2024-07-10 09:08:18,500.500 DEBUG [Thread-24 (_generate_worker)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/llm/_position.yaml: file not found
2024-07-10 09:08:18,740.740 DEBUG [Thread-24 (_generate_worker)] [yaml_utils.py:28] - Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/llm/_position.yaml: Failed to load YAML file /app/api/core/model_runtime/model_providers/azure_openai/llm/_position.yaml: file not found
2024-07-10 09:08:18,742.742 DEBUG [Thread-24 (_generate_worker)] [_config.py:80] - load_ssl_context verify=True cert=None trust_env=True http2=False
2024-07-10 09:08:18,743.743 DEBUG [Thread-24 (_generate_worker)] [_config.py:146] - load_verify_locations cafile='/app/api/.venv/lib/python3.10/site-packages/certifi/cacert.pem'
2024-07-10 09:08:18,786.786 DEBUG [Thread-24 (_generate_worker)] [_config.py:80] - load_ssl_context verify=True cert=None trust_env=True http2=False
2024-07-10 09:08:18,786.786 DEBUG [Thread-24 (_generate_worker)] [_config.py:146] - load_verify_locations cafile='/app/api/.venv/lib/python3.10/site-packages/certifi/cacert.pem'
2024-07-10 09:08:18,827.827 DEBUG [Thread-24 (_generate_worker)] [_config.py:80] - load_ssl_context verify=True cert=None trust_env=True http2=False
2024-07-10 09:08:18,828.828 DEBUG [Thread-24 (_generate_worker)] [_config.py:146] - load_verify_locations cafile='/app/api/.venv/lib/python3.10/site-packages/certifi/cacert.pem'
2024-07-10 09:08:18,880.880 DEBUG [Thread-24 (_generate_worker)] [_base_client.py:446] - Request options: {'method': 'post', 'url': '/deployments/gpt-4o/chat/completions', 'headers': {'api-key': 'xxxx'}, 'files': None, 'json_data': {'messages': [{'role': 'system', 'content': 'You are a super AI assistant. Give priority to using Simplified Chinese when communicating with users.\n'}, {'role': 'user', 'content': 'hi'}], 'model': 'gpt-4o', 'frequency_penalty': 0.1, 'max_tokens': 4096, 'presence_penalty': 0.1, 'stream': True, 'temperature': 0.8, 'tools': [{'type': 'function', 'function': {'name': 'google_search', 'description': 'A tool for performing a Google SERP search and extracting snippets and webpages.Input should be a search query.', 'parameters': {'type': 'object', 'properties': {'query': {'type': 'string', 'description': 'key words for searching'}}, 'required': ['query']}}}, {'type': 'function', 'function': {'name': 'webscraper', 'description': 'A tool for scraping webpages. Input should be a URL.', 'parameters': {'type': 'object', 'properties': {'url': {'type': 'string', 'description': 'url for scraping'}}, 'required': ['url']}}}, {'type': 'function', 'function': {'name': 'current_time', 'description': 'A tool for getting the current time.', 'parameters': {'type': 'object', 'properties': {}, 'required': []}}}, {'type': 'function', 'function': {'name': 'bing_web_search', 'description': 'A tool for performing a Bing SERP search and extracting snippets and webpages.Input should be a search query.', 'parameters': {'type': 'object', 'properties': {'query': {'type': 'string', 'description': 'key words for searching'}}, 'required': ['query']}}}, {'type': 'function', 'function': {'name': 'github_repositories', 'description': 'A tool when you wants to search for popular warehouses or open source projects for any keyword. 
format query condition like "keywords+language:js", language can be other dev languages.', 'parameters': {'type': 'object', 'properties': {'query': {'type': 'string', 'description': 'The query of you want to search, format query condition like "keywords+language:js", language can be other dev languages.'}, 'top_n': {'type': 'number', 'description': 'Extract the first N records from the returned result.'}}, 'required': ['query', 'top_n']}}}, {'type': 'function', 'function': {'name': 'searxng_search', 'description': 'Perform searches on SearXNG and get results.', 'parameters': {'type': 'object', 'properties': {'query': {'type': 'string', 'description': 'Key words for searching'}}, 'required': ['query']}}}, {'type': 'function', 'function': {'name': 'jina_reader', 'description': 'A tool for scraping webpages. Input should be a URL.', 'parameters': {'type': 'object', 'properties': {'url': {'type': 'string', 'description': 'url for scraping'}, 'request_params': {'type': 'string', 'description': 'request parameters'}}, 'required': ['url']}}}, {'type': 'function', 'function': {'name': 'jina_search', 'description': 'A tool for searching results on the web for grounding. Input should be a simple question.', 'parameters': {'type': 'object', 'properties': {'query': {'type': 'string', 'description': 'simple question to ask on the web'}}, 'required': ['query']}}}, {'type': 'function', 'function': {'name': 'wikipedia_search', 'description': 'A tool for performing a Wikipedia search and extracting snippets and webpages. 
Input should be a search query.', 'parameters': {'type': 'object', 'properties': {'query': {'type': 'string', 'description': 'key words for searching, this should be in the language of "language" parameter'}, 'language': {'type': 'string', 'description': 'language of the wikipedia to be searched, only "de" for German, "en" for English, "fr" for French, "hi" for Hindi, "ja" for Japanese, "ko" for Korean, "pl" for Polish, "pt" for Portuguese, "ro" for Romanian, "uk" for Ukrainian, "vi" for Vietnamese, and "zh" for Chinese are supported'}}, 'required': ['query', 'language']}}}], 'top_p': 0.9, 'user': '3fbf6914-c670-4967-b1bb-6fe1b459baeb'}}
2024-07-10 09:08:18,892.892 DEBUG [Thread-24 (_generate_worker)] [_base_client.py:949] - Sending HTTP Request: POST https://xxxx.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-05-01-preview
2024-07-10 09:08:18,893.893 DEBUG [Thread-24 (_generate_worker)] [_trace.py:45] - connect_tcp.started host='172.17.0.1' port=20172 local_address=None timeout=5.0 socket_options=None
2024-07-10 09:08:18,893.893 DEBUG [Thread-24 (_generate_worker)] [_trace.py:45] - connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0xffff64258f40>
2024-07-10 09:08:18,894.894 DEBUG [Thread-24 (_generate_worker)] [_trace.py:45] - send_request_headers.started request=<Request [b'CONNECT']>
2024-07-10 09:08:18,894.894 DEBUG [Thread-24 (_generate_worker)] [_trace.py:45] - send_request_headers.complete
2024-07-10 09:08:18,894.894 DEBUG [Thread-24 (_generate_worker)] [_trace.py:45] - send_request_body.started request=<Request [b'CONNECT']>
2024-07-10 09:08:18,895.895 DEBUG [Thread-24 (_generate_worker)] [_trace.py:45] - send_request_body.complete
2024-07-10 09:08:18,895.895 DEBUG [Thread-24 (_generate_worker)] [_trace.py:45] - receive_response_headers.started request=<Request [b'CONNECT']>
2024-07-10 09:08:18,896.896 DEBUG [Thread-24 (_generate_worker)] [_trace.py:45] - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'Connection established', [])
2024-07-10 09:08:18,896.896 DEBUG [Thread-24 (_generate_worker)] [_trace.py:45] - start_tls.started ssl_context=<ssl.SSLContext object at 0xffff6428acc0> server_hostname='xxxx.openai.azure.com' timeout=5.0
2024-07-10 09:08:19,972.972 DEBUG [Thread-24 (_generate_worker)] [_trace.py:45] - start_tls.complete return_value=<httpcore._backends.sync.SyncStream object at 0xffff64258d60>
2024-07-10 09:08:19,973.973 DEBUG [Thread-24 (_generate_worker)] [_trace.py:45] - send_request_headers.started request=<Request [b'POST']>
2024-07-10 09:08:19,974.974 DEBUG [Thread-24 (_generate_worker)] [_trace.py:45] - send_request_headers.complete
2024-07-10 09:08:19,974.974 DEBUG [Thread-24 (_generate_worker)] [_trace.py:45] - send_request_body.started request=<Request [b'POST']>
2024-07-10 09:08:19,974.974 DEBUG [Thread-24 (_generate_worker)] [_trace.py:45] - send_request_body.complete
2024-07-10 09:08:19,974.974 DEBUG [Thread-24 (_generate_worker)] [_trace.py:45] - receive_response_headers.started request=<Request [b'POST']>
2024-07-10 09:08:20,659.659 DEBUG [Thread-24 (_generate_worker)] [_trace.py:45] - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Transfer-Encoding', b'chunked'), (b'Content-Type', b'text/event-stream; charset=utf-8'), (b'cmp-upstream-response-duration', b'295'), (b'x-ms-region', b'East US'), (b'apim-request-id', b'fcec6407-50e8-4ff5-a5b0-3340d661b579'), (b'x-ratelimit-remaining-requests', b'4629'), (b'x-accel-buffering', b'no'), (b'x-ms-rai-invoked', b'true'), (b'x-envoy-upstream-service-time', b'300'), (b'x-request-id', b'109fde82-2ecf-4004-92d1-c1835b8ccc0b'), (b'ms-azureml-model-time', b'296'), (b'x-ms-client-request-id', b'fcec6407-50e8-4ff5-a5b0-3340d661b579'), (b'Strict-Transport-Security', b'max-age=31536000; includeSubDomains; preload'), (b'azureml-model-session', b'd025-20240702105120'), (b'x-content-type-options', b'nosniff'), (b'x-ratelimit-remaining-tokens', b'4606727'), (b'Date', b'Wed, 10 Jul 2024 09:08:19 GMT')])
2024-07-10 09:08:20,661.661 INFO [Thread-24 (_generate_worker)] [_client.py:1026] - HTTP Request: POST https://xxxx.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-05-01-preview "HTTP/1.1 200 OK"
2024-07-10 09:08:20,661.661 DEBUG [Thread-24 (_generate_worker)] [_base_client.py:988] - HTTP Response: POST https://xxxx.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-05-01-preview "200 OK" Headers({'transfer-encoding': 'chunked', 'content-type': 'text/event-stream; charset=utf-8', 'cmp-upstream-response-duration': '295', 'x-ms-region': 'East US', 'apim-request-id': 'fcec6407-50e8-4ff5-a5b0-3340d661b579', 'x-ratelimit-remaining-requests': '4629', 'x-accel-buffering': 'no', 'x-ms-rai-invoked': 'true', 'x-envoy-upstream-service-time': '300', 'x-request-id': '109fde82-2ecf-4004-92d1-c1835b8ccc0b', 'ms-azureml-model-time': '296', 'x-ms-client-request-id': 'fcec6407-50e8-4ff5-a5b0-3340d661b579', 'strict-transport-security': 'max-age=31536000; includeSubDomains; preload', 'azureml-model-session': 'd025-20240702105120', 'x-content-type-options': 'nosniff', 'x-ratelimit-remaining-tokens': '4606727', 'date': 'Wed, 10 Jul 2024 09:08:19 GMT'})
2024-07-10 09:08:20,661.661 DEBUG [Thread-24 (_generate_worker)] [_base_client.py:996] - request_id: 109fde82-2ecf-4004-92d1-c1835b8ccc0b
2024-07-10 09:08:20,662.662 DEBUG [Thread-24 (_generate_worker)] [_trace.py:45] - receive_response_body.started request=<Request [b'POST']>

[on_llm_before_invoke]
Model: gpt-4o
Parameters:
        temperature: 0.8
        top_p: 0.9
        presence_penalty: 0.1
        frequency_penalty: 0.1
        max_tokens: 4096
        Tools:
                google_search
                webscraper
                current_time
                bing_web_search
                github_repositories
                searxng_search
                jina_reader
                jina_search
                wikipedia_search
Stream: True
User: 3fbf6914-c670-4967-b1bb-6fe1b459baeb
Prompt messages:
        role: system
        content: You are a super AI assistant. Give priority to using Simplified Chinese when communicating with users.

        role: user
        content: hi

2024-07-10 09:08:20,678.678 INFO [Thread-22 (process_request_thread)] [_internal.py:97] - 172.26.0.9 - - [10/Jul/2024 09:08:20] "POST /console/api/installed-apps/ba63b5c3-5baa-4e30-93ba-c901e37c7189/chat-messages HTTP/1.1" 200 -
2024-07-10 09:08:21,117.117 ERROR [Thread-24 (_generate_worker)] [app_generator.py:216] - Error when generating
Traceback (most recent call last):
  File "/app/api/core/model_runtime/model_providers/__base/large_language_model.py", line 408, in _invoke_result_generator
    for chunk in result:
  File "/app/api/core/model_runtime/model_providers/azure_openai/llm/llm.py", line 380, in _handle_chat_generate_stream_response
    self._update_tool_calls(tool_calls=tool_calls, tool_calls_response=delta.delta.tool_calls)
AttributeError: 'NoneType' object has no attribute 'tool_calls'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/api/core/app/apps/agent_chat/app_generator.py", line 198, in _generate_worker
    runner.run(
  File "/app/api/core/app/apps/agent_chat/app_runner.py", line 251, in run
    self._handle_invoke_result(
  File "/app/api/core/app/apps/base_app_runner.py", line 266, in _handle_invoke_result
    self._handle_invoke_result_stream(
  File "/app/api/core/app/apps/base_app_runner.py", line 300, in _handle_invoke_result_stream
    for result in invoke_result:
  File "/app/api/core/agent/fc_agent_runner.py", line 111, in run
    for chunk in chunks:
  File "/app/api/core/model_runtime/model_providers/__base/large_language_model.py", line 432, in _invoke_result_generator
    raise self._transform_invoke_error(e)
core.model_runtime.errors.invoke.InvokeError: [azure_openai] Error: 'NoneType' object has no attribute 'tool_calls'
2024-07-10 09:08:21,122.122 DEBUG [Thread-22 (process_request_thread)] [based_generate_task_pipeline.py:60] - error: [azure_openai] Error: 'NoneType' object has no attribute 'tool_calls'

The above is the complete log of a single request. It looks like it may have been caused by a "Failed to load YAML file" error? Hope this helps.
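For reference, the exception in the traceback can be reproduced in isolation: during streaming, `delta.delta` arrives as `None`, so accessing any attribute on it raises exactly this error. A minimal sketch:

```python
# Minimal repro of the error in the log above: when a streamed chunk's
# delta is None, accessing .tool_calls on it raises AttributeError.
delta = None
try:
    delta.tool_calls
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'tool_calls'
```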

laipz8200 commented 1 day ago

@shelltea,

This log indicates that the error occurs because delta.delta is None. This structure comes from the openai package, and since version 1.29.0 this property has been typed as non-None. Could you please run docker compose down, remove all containers (you can use docker container prune if you don't have other containers), and then deploy Dify again?

real-zony commented 1 day ago

> @shelltea,
>
> This log indicates that the error occurs because delta.delta is None. This structure is from openai, but in version 1.29.0, this property is marked as not None. Could you please run docker compose down, remove all containers (maybe you can use docker container prune if you don't have other containers), and then deploy Dify again?

The above operation might not work. Even in a newly deployed Dify, using Azure OpenAI still hits the same issue.

shelltea commented 1 day ago

> @shelltea,
>
> This log indicates that the error occurs because delta.delta is None. This structure is from openai, but in version 1.29.0, this property is marked as not None. Could you please run docker compose down, remove all containers (maybe you can use docker container prune if you don't have other containers), and then deploy Dify again?

After this issue occurred, I actually redeployed several times, but it still hasn't been resolved.

real-zony commented 1 day ago

Upgrading to version 0.6.13 still has this issue, which is so frustrating. This problem makes all Agents unusable, and rolling back to an earlier version produces errors, so the version cannot be rolled back.

I'm not a Python expert, but this exception might be caused by the recent commit `fix(api/model_runtime/azure/llm): Switch to tool_call`.

My temporary solution is to modify the llm.py file, restoring the code from commit b699945, and then run `docker build -t langgenius/dify-api:0.6.13 .` to build a new image. It's working fine now.


@shelltea If you urgently need to use Dify, you can try the temporary solution I mentioned.

shelltea commented 1 day ago

@real-zony Thank you, I'll try your solution.

shelltea commented 23 hours ago

@real-zony I used your temporary solution to resolve this issue. Thanks again.

The method I used was slightly different; instead of rebuilding the image, I downloaded a specific version of llm.py and then mounted it into the container using the following configuration:

```yaml
    volumes:
      # Mount the storage directory to the container, for storing user files.
      - ./volumes/app/storage:/app/api/storage
      - /your-path/llm.py:/app/api/core/model_runtime/model_providers/azure_openai/llm/llm.py
```

jumpei-tsutsui commented 11 hours ago

Hi,

I was also facing the same issue, but I resolved it in my case.

Previously, I encountered the problem of delta.delta becoming None, which was due to the asynchronous filtering mode of Azure OpenAI's custom content filter. In my Dify setup, the issue was resolved by switching the content filter's streaming mode from "Asynchronous Filter" to "Default".

Asynchronous filtering is a highly useful feature, so it would be great if Dify supported it. As of now it does not, so this would be a new feature request.
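A defensive sketch of what a Dify-side fix could look like, given the explanation above: skip stream chunks whose `delta` is None (the payload-less chunks that Azure's "Asynchronous Filter" mode emits). The chunk objects here are hypothetical stand-ins for the openai stream, not Dify's actual types:

```python
from types import SimpleNamespace

def iter_tool_calls(stream):
    """Yield tool_calls from streamed chunks, skipping the delta-less
    chunks that Azure's asynchronous content filter can emit."""
    for chunk in stream:
        delta = chunk.delta
        if delta is None:  # async content filter chunk: no payload
            continue
        if delta.tool_calls:
            yield from delta.tool_calls

# Simulated stream: one filter chunk (delta=None), one real chunk.
chunks = [
    SimpleNamespace(delta=None),
    SimpleNamespace(delta=SimpleNamespace(tool_calls=["call_a"])),
]
print(list(iter_tool_calls(chunks)))  # ['call_a']
```

With a guard like this, the handler would no longer crash on async-filter chunks, though proper support for the feature would still need to surface the filter results.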

real-zony commented 11 hours ago

> Hi,
>
> I was also facing the same issue, but I resolved it in my case.
>
> Previously, I encountered the problem of delta.delta becoming None, which was due to the asynchronous filtering support of Azure OpenAI's customized content filter. In Dify, the issue was resolved by switching the custom content filter's streaming mode from "Asynchronous Filter" to "Default".
>
> Asynchronous filtering is a highly useful feature, so it would be great if it were supported in Dify. However, as of now, it is not supported, so it would be a new feature.

I think this is what I missed. I did add an async filter in my use case, which is probably why @laipz8200 couldn't reproduce the issue, right?

shelltea commented 10 hours ago

> Hi,
>
> I was also facing the same issue, but I resolved it in my case.
>
> Previously, I encountered the problem of delta.delta becoming None, which was due to the asynchronous filtering support of Azure OpenAI's customized content filter. In Dify, the issue was resolved by switching the custom content filter's streaming mode from "Asynchronous Filter" to "Default".
>
> Asynchronous filtering is a highly useful feature, so it would be great if it were supported in Dify. However, as of now, it is not supported, so it would be a new feature.

I also enabled "Asynchronous Filter," which is a great feature. I don't really want to turn it off, so I used the temporary solution above to resolve the issue. Still, I hope Dify can support "Asynchronous Filter" as soon as possible.