Open · KinWang130 opened this issue 3 weeks ago
@KinWang130 Hello! I'm Dosu, a bot that can help you resolve bugs, answer questions, and become a contributor. Welcome! I'll help out while you wait for a human maintainer. Looking forward to solving your issue! 🤖
To resolve the issue with the "Suggested questions after answer" feature in the Wenxin Yiyan or Zhipu AI models, you need to ensure that the `temperature` parameter is set within the valid range specified in the configuration. For Wenxin Yiyan and Zhipu AI, the valid range for the `temperature` parameter is [0.1, 1.0], both bounds inclusive.
Here is the relevant part of the configuration for Wenxin Yiyan:

```yaml
parameter_rules:
  - name: temperature
    use_template: temperature
    default: 0.95
    min: 0.1
    max: 1.0
    help:
      en_US: Sampling temperature, controls the randomness of the output, must be a positive number. The value range is [0.1,1.0], which cannot be equal to 0. The default value is 0.95. The larger the value, the more random and creative the output will be; the smaller the value, the output will be more stable or certain. It is recommended that you adjust the top_p or temperature parameters according to the application scenario, but do not adjust both parameters at the same time.
```
To fix the issue, ensure that the `temperature` parameter is set to a value between 0.1 and 1.0. For example, you can set it to the default value of 0.95:

```yaml
temperature: 0.95
```
Additionally, ensure that the `suggested_questions_after_answer` configuration is correctly set up. According to the `SuggestedQuestionsAfterAnswerConfigManager` class, the configuration should be a dictionary with an `enabled` key of boolean type:
```python
class SuggestedQuestionsAfterAnswerConfigManager:
    @classmethod
    def validate_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]:
        if not config.get("suggested_questions_after_answer"):
            config["suggested_questions_after_answer"] = {"enabled": False}

        if not isinstance(config["suggested_questions_after_answer"], dict):
            raise ValueError("suggested_questions_after_answer must be of dict type")

        if "enabled" not in config["suggested_questions_after_answer"] or not \
                config["suggested_questions_after_answer"]["enabled"]:
            config["suggested_questions_after_answer"]["enabled"] = False

        if not isinstance(config["suggested_questions_after_answer"]["enabled"], bool):
            raise ValueError("enabled in suggested_questions_after_answer must be of boolean type")

        return config, ["suggested_questions_after_answer"]
```
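A quick runnable check of how this validator behaves (the class body is reproduced here so the snippet is self-contained):

```python
class SuggestedQuestionsAfterAnswerConfigManager:
    @classmethod
    def validate_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]:
        if not config.get("suggested_questions_after_answer"):
            config["suggested_questions_after_answer"] = {"enabled": False}
        if not isinstance(config["suggested_questions_after_answer"], dict):
            raise ValueError("suggested_questions_after_answer must be of dict type")
        if "enabled" not in config["suggested_questions_after_answer"] or not \
                config["suggested_questions_after_answer"]["enabled"]:
            config["suggested_questions_after_answer"]["enabled"] = False
        if not isinstance(config["suggested_questions_after_answer"]["enabled"], bool):
            raise ValueError("enabled in suggested_questions_after_answer must be of boolean type")
        return config, ["suggested_questions_after_answer"]


# Missing key: the validator fills in a disabled default
cfg, keys = SuggestedQuestionsAfterAnswerConfigManager.validate_and_set_defaults({})
print(cfg)  # → {'suggested_questions_after_answer': {'enabled': False}}

# Explicitly enabled: the flag is preserved
cfg2, _ = SuggestedQuestionsAfterAnswerConfigManager.validate_and_set_defaults(
    {"suggested_questions_after_answer": {"enabled": True}}
)
print(cfg2["suggested_questions_after_answer"]["enabled"])  # → True
```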
Ensure that your configuration follows this structure to avoid any validation errors:

```yaml
suggested_questions_after_answer:
  enabled: true
```
This should prevent the `ValueError` raised during parameter validation [1][2][3][4][5].
However, the temperature range of the ERNIE Bot API is (0, 1], i.e. strictly greater than 0, whereas Dify's code hard-codes the temperature to 0, so the validation always fails. For example, with a valid temperature of 0.1 the call would be:

```python
response = model_instance.invoke_llm(
    prompt_messages=prompt_messages,
    model_parameters={"max_tokens": 256, "temperature": 0.1},
    stream=False,
)
```
Self Checks
Dify version
0.6.11
Cloud or Self Hosted
Self Hosted (Source)
Steps to reproduce
When the system reasoning model is Wenxin Yiyan or Zhipu AI, the "next question suggestion" feature cannot be used and the system reports an error. The code added parameter validation in which the `temperature` parameter is hard-coded to 0, which falls below the models' minimum: the temperature range of Wenxin Yiyan and Zhipu AI is (0, 1.0], and the configured minimum for Wenxin Yiyan is 0.1.
The specific error and verification are as follows:
✔️ Expected Behavior
I hope the default value of `temperature` in `api.core.llm_generator.llm_generator.LLMGenerator.generate_suggested_questions_after_answer` can be changed to 0.1, or that this bug is fixed some other way, so that the "Next Question Suggestion" feature works normally.
```python
response = model_instance.invoke_llm(
    prompt_messages=prompt_messages,
    model_parameters={"max_tokens": 256, "temperature": 0.1},
    stream=False,
)
```
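One of the "other methods" could be clamping the requested temperature into the model's allowed range instead of changing the hard-coded default. This is a hypothetical `clamp_temperature` helper, offered as a sketch rather than Dify's actual code; the bounds mirror the Wenxin Yiyan rule:

```python
def clamp_temperature(value: float, min_value: float = 0.1, max_value: float = 1.0) -> float:
    """Clamp a requested temperature into [min_value, max_value]."""
    return max(min_value, min(value, max_value))

print(clamp_temperature(0))     # → 0.1 (the invalid 0 is raised to the minimum)
print(clamp_temperature(0.95))  # → 0.95 (in-range values pass through unchanged)
```

With such a clamp applied before `invoke_llm`, models whose minimum is above 0 would never receive an out-of-range temperature.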
❌ Actual Behavior
```
ValueError: Model Parameter temperature should be greater than or equal to 0.1.
```