HeMuling opened this issue 1 month ago
Hi, @HeMuling. I'm Dosu, and I'm helping the Dify team manage their backlog. I'm marking this issue as stale.

Issue Summary: a feature request to make the LLM configurable (model choice, max tokens) for built-in features such as prompt generation and knowledge-base Q&A, e.g. to use gpt-4o-mini over the default gpt-4o.

Next Steps:

Thank you for your understanding and contribution!
Self Checks
1. Is this request related to a challenge you're experiencing? Tell me about your story.
I've noticed that some of the features that rely on an LLM (such as prompt generation and the Q&A option in the knowledge base) do not let you configure the LLM (e.g. choosing the model or setting max tokens), which causes some problems. For example, in prompt generation I often run into cases where the generated prompt is incomplete (because the max token limit is too low). Also, the default model for prompt generation and Q&A appears to be gpt-4o, while I would like to use gpt-4o-mini. I therefore hope an LLM config for these features can be added to the settings.
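To illustrate, here is a minimal sketch of the kind of per-feature LLM setting I have in mind. All names here (`SystemLLMConfig`, the per-feature variables) are hypothetical and do not exist in Dify; this is only meant to show the shape of the request, not a proposed implementation.

```python
from dataclasses import dataclass

@dataclass
class SystemLLMConfig:
    """Hypothetical per-feature LLM settings (illustration only)."""
    provider: str = "openai"
    model: str = "gpt-4o"
    max_tokens: int = 512

# Hypothetical settings entries, one per built-in feature:
prompt_generation_llm = SystemLLMConfig(
    model="gpt-4o-mini",   # use the cheaper model instead of the default gpt-4o
    max_tokens=4096,       # large enough that generated prompts are not cut off
)
knowledge_base_qa_llm = SystemLLMConfig(model="gpt-4o-mini", max_tokens=2048)
```

Exposing something like this in the settings UI would cover both problems above: truncated prompts (via max_tokens) and the inability to pick a cheaper model (via model).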
2. Additional context or comments
No response
3. Can you help us with this feature?