langgenius / dify

Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
https://dify.ai

The env variable is wrong in the loop #9056

Open andeyeluguo opened 5 days ago

andeyeluguo commented 5 days ago

Self Checks

Dify version

0.8.3

Cloud or Self Hosted

Self Hosted (Source)

Steps to reproduce

After I send 'hi' to the bot, it should reply with '[hi][hi][hi][hi]', but it replies with '[][][][]'. I don't know whether this bug has been fixed in a newer version; the older version behaved correctly. loop_error.zip
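For reference, a minimal reproduction sketch against the app's service API. The base URL and API key below are placeholder assumptions for a self-hosted deployment; adjust both for your setup. In blocking mode the final text comes back in the answer field:

```python
# Minimal reproduction sketch: send "hi" to the app and print the reply.
# Assumptions: a self-hosted Dify instance reachable at API_BASE, and
# APP_API_KEY set to the app's service API key.
import requests

API_BASE = "http://localhost/v1"      # assumed self-hosted endpoint
APP_API_KEY = "app-xxxxxxxxxxxxxxxx"  # placeholder app API key

resp = requests.post(
    f"{API_BASE}/chat-messages",
    headers={"Authorization": f"Bearer {APP_API_KEY}"},
    json={
        "inputs": {},
        "query": "hi",
        "response_mode": "blocking",  # return the full answer in one response
        "user": "repro-user",
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json().get("answer"))  # expected "[hi][hi][hi][hi]", observed "[][][][]"
```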

✔️ Expected Behavior

[hi][hi][hi][hi]

❌ Actual Behavior

[][][][]

dosubot[bot] commented 5 days ago

Hello @andeyeluguo! 👋 I'm here to help you with bug fixes, answer questions, and even guide you on how to become a contributor. Let's tackle this together!

The issue you're experiencing with the response formatting in version 0.8.3 might be related to changes in the response generation logic or configuration settings. Here are a few things you can check:

  1. Response Generation Logic: Ensure that the AppGenerateService class and its associated generators are correctly configured for your app mode. The response generation logic uses different generators based on the app mode, such as CompletionAppGenerator or AgentChatAppGenerator [1]; a simplified sketch of this dispatch pattern appears at the end of this comment.

  2. Configuration Settings: If you're using JSON Schema for response formatting, verify that the response_format is set correctly to json_schema in your configuration. This setting can affect how responses are structured [2].

  3. Recent Changes: Check if there have been any recent changes to the AgentChatAppGenerator or related components that might impact response formatting. The AgentChatAppGenerateResponseConverter class handles response formatting and might have updates that affect output [3][4].

  4. Testing: Run the specific tests related to response generation to identify any issues. Tests like test_invoke_chat_model and test_invoke_stream_chat_model can help verify whether response generation is functioning as expected [5]; see the sketch right after this list for one way to run only those tests.
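As one way to run only those two tests, here is a minimal sketch using pytest's programmatic entry point. It assumes it is launched from the repository's api/ directory with the test dependencies installed; adjust the -k expression or working directory for your checkout:

```python
# Run only the response-generation tests named above.
# Assumption: executed from the repository's api/ directory, with test
# dependencies and any required model credentials configured.
import sys

import pytest

if __name__ == "__main__":
    # -k selects tests whose names match the expression; -v prints each result.
    sys.exit(
        pytest.main(["-k", "test_invoke_chat_model or test_invoke_stream_chat_model", "-v"])
    )
```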

If these steps don't resolve the issue, consider reviewing the detailed implementation of the response generation logic and any recent changes in the codebase.
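Regarding point 1, the dispatch-by-app-mode idea looks roughly like the following simplified sketch. This is an illustration only, not the actual Dify implementation; only the class and mode names are borrowed from the comment above:

```python
# Illustration only: a simplified version of the "pick a generator per app mode"
# pattern described in point 1. This is NOT the real Dify code.
from enum import Enum


class AppMode(Enum):
    COMPLETION = "completion"
    AGENT_CHAT = "agent-chat"


class CompletionAppGenerator:
    def generate(self, query: str) -> str:
        # Placeholder standing in for the real completion pipeline.
        return f"[completion] {query}"


class AgentChatAppGenerator:
    def generate(self, query: str) -> str:
        # Placeholder standing in for the real agent-chat pipeline.
        return f"[agent-chat] {query}"


GENERATORS = {
    AppMode.COMPLETION: CompletionAppGenerator,
    AppMode.AGENT_CHAT: AgentChatAppGenerator,
}


def generate(mode: AppMode, query: str) -> str:
    # If a mode is mapped to the wrong generator, the reply format changes,
    # which is the kind of misconfiguration point 1 suggests checking.
    generator_cls = GENERATORS.get(mode)
    if generator_cls is None:
        raise ValueError(f"unsupported app mode: {mode}")
    return generator_cls().generate(query)


print(generate(AppMode.AGENT_CHAT, "hi"))  # -> "[agent-chat] hi"
```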

To continue talking to Dosu, mention @dosu.