Closed qdyujiang002 closed 3 weeks ago
Hello @qdyujiang002! I'm here to assist you with any bugs, questions, or contributions. Let's work together to solve this issue.
I found a similar closed issue that might be relevant to your problem. The issue titled Workflow - Error was resolved by pulling the v0.8.0 release again, as the previous release was deleted but the version number remained the same [1].
Regarding your specific issue with the stream API not responding with the 'message' event when using certain nodes like 'Template node' or 'Iteration node' after the 'LLM node', this behavior occurs because these nodes do not inherently support streaming output. To achieve streaming output after each iteration, you need to configure a Direct Reply Node inside the iteration node.
Supporting evidence from the documentation:
"Use an Iteration Node to wrap an LLM Node and generate content for each chapter through multiple iterations. Add a Direct Answer Node inside the iteration node to achieve streaming output after each iteration" [2].
Ensure that you have configured the Direct Reply Node correctly within the Iteration Node to handle the output of each iteration step [3].
This problem occurs on the 'LLM node' whenever a 'Template node' or another specific node appears after the LLM node, no matter how many nodes are in between.
I've encountered a similar issue, but my problem is exactly the opposite: two unexpected messages are output immediately after the first node ends.
Is this issue still present in the latest version (0.8.3)? I'd love to hear your feedback.
Yes, it is still present in the newest version.
After some testing, I believe there are a few key points to share:
- If the output of the LLM Node is not used in the Template Node, the stream of the Answer Node will not be blocked. You can create a minimal workflow to verify this.
- If there is an Iteration Node before the Answer Node, it will be blocked. This is because the Iteration Node can also contain an Answer Node, and we currently do not have a way to determine this.
- If you don't want the Iteration Node to block unrelated Answer Nodes, you can place the Answer Node before the Iteration Node.
I hope this helps resolve your issue!
Sorry, my wording may have been inaccurate. The stream that is blocked is not the Answer Node's but the streaming of the LLM Node itself. In other words, if a workflow contains an LLM Node followed by other nodes (for example, a Template Node), then when the workflow's stream API reaches the LLM Node, the stream of the LLM Node itself is blocked, rather than the final Answer Node's.
Our message event is generated by the whole workflow, not the output from the LLM Node.
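The behavior described in the key points above can be modeled roughly as follows. This is a conceptual sketch only — the function, node types, and field names (`can_stream_llm_output`, `"downstream"`, `"inputs"`) are invented for illustration and are not Dify's actual implementation:

```python
def can_stream_llm_output(workflow, llm_node_id):
    """Conceptual gate: forward LLM chunks as 'message' events only if no
    downstream node needs the complete LLM output before it can run.

    - A Template node that references the LLM output must wait for the
      full text, so streaming is blocked.
    - An Iteration node may itself contain an Answer node, which cannot
      be determined upfront, so it also blocks streaming.
    """
    for node in workflow[llm_node_id]["downstream"]:
        if node["type"] == "template" and llm_node_id in node.get("inputs", []):
            return False  # template consumes the complete LLM output first
        if node["type"] == "iteration":
            return False  # iteration may contain its own Answer node
    return True

# Hypothetical workflows: one blocked by a Template node, one that streams freely.
blocked = {"llm_1": {"downstream": [{"type": "template", "inputs": ["llm_1"]}]}}
free = {"llm_1": {"downstream": [{"type": "answer", "inputs": ["llm_1"]}]}}
print(can_stream_llm_output(blocked, "llm_1"))  # False
print(can_stream_llm_output(free, "llm_1"))     # True
```

This also illustrates the workaround from the earlier comment: placing the Answer Node before the Iteration Node keeps the blocking node out of the LLM node's downstream path.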
Oh I see, thanks
Self Checks
Dify version
0.8.0
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
✔️ Expected Behavior
When the stream reaches the LLM node, it should respond with event type 'message' between event types 'node_start' and 'node_finish', like this:
❌ Actual Behavior
When a chatbot workflow uses certain nodes (such as a 'Template node' or 'Iteration node') after the 'LLM node', the stream API response for the LLM node has no 'message' events, only 'node_start' and 'node_finish'; the LLM answer is delivered all at once in the 'node_finish' event, like this:
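A quick way to check which event types a run actually produced is to parse the `data:` lines of the SSE response and collect their `event` fields. The helper below is a minimal sketch assuming the `data: {...json...}` line format; the sample payloads are illustrative (using the event names as written in this issue), not real API output:

```python
import json

def event_types(sse_lines):
    """Extract the 'event' field from each SSE 'data:' line of a stream response."""
    types = []
    for line in sse_lines:
        if line.startswith("data: "):
            payload = json.loads(line[len("data: "):])
            types.append(payload.get("event"))
    return types

# A healthy LLM-node stream, as described in the expected behavior above.
healthy = [
    'data: {"event": "node_start", "data": {"node_type": "llm"}}',
    'data: {"event": "message", "answer": "Hel"}',
    'data: {"event": "message", "answer": "lo"}',
    'data: {"event": "node_finish", "data": {"node_type": "llm"}}',
]
print(event_types(healthy))  # ['node_start', 'message', 'message', 'node_finish']
```

In the buggy case described above, the same check would show only 'node_start' followed directly by 'node_finish', with no 'message' events in between.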