langgenius / dify

Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
https://dify.ai

Support llm configuration for prompt generation etc. #10079

Open HeMuling opened 1 month ago

HeMuling commented 1 month ago

Self Checks

1. Is this request related to a challenge you're experiencing? Tell me about your story.

I've noticed that some of the functions that require an LLM (such as prompt generation and the Q&A feature in the knowledge base) do not support LLM configuration (such as choosing the model or setting max tokens), and this causes problems. For example, in prompt generation I often get an incomplete prompt (because max tokens is too small). Also, the default model for prompt generation and Q&A appears to be gpt-4o, whereas I would like to use gpt-4o-mini.

Therefore, I hope an LLM configuration can be added to the settings to cover the cases mentioned above.
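To illustrate the request, the desired setting could be modeled as a small per-feature config object. This is a hypothetical sketch, not part of Dify's current API; the class, field names, and default values are all illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class PromptGenLLMConfig:
    """Hypothetical LLM settings for prompt generation / knowledge-base Q&A.

    Names and defaults are illustrative only; Dify does not expose this class.
    """
    model: str = "gpt-4o"   # the issue reports gpt-4o as the implicit default
    max_tokens: int = 512   # illustrative default; too small a value truncates prompts

# The override this issue asks for: a cheaper model and a larger completion budget
cfg = PromptGenLLMConfig(model="gpt-4o-mini", max_tokens=2048)
print(cfg.model, cfg.max_tokens)
```

The point of the sketch is that both the model choice and the token budget become user-settable instead of hard-coded, which would address both problems described above.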

2. Additional context or comments

No response

3. Can you help us with this feature?

dosubot[bot] commented 2 days ago

Hi, @HeMuling. I'm Dosu, and I'm helping the Dify team manage their backlog. I'm marking this issue as stale.

Issue Summary:

Next Steps:

Thank you for your understanding and contribution!