langgenius / dify

Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
https://dify.ai

need support hunyuan-turbo #8371

Closed: xiandan-erizo closed this issue 1 month ago

xiandan-erizo commented 1 month ago

Self Checks

1. Is this request related to a challenge you're experiencing? Tell me about your story.

Yes, I need to use the Hunyuan-Turbo model, but it is currently not supported.

2. Additional context or comments

No response

3. Can you help us with this feature?

dosubot[bot] commented 1 month ago

Hey @xiandan-erizo! 👋 I'm here to help you with any bugs, questions, or contributions you might have regarding the Hunyuan-Turbo model. I'm an AI bot and I can assist you while you wait for a human maintainer. Let me know how I can help!

I found some relevant information that might help you with integrating the Hunyuan-Turbo model into Dify.

To support a new model like Hunyuan-Turbo, the specific requirements or dependencies include:

  1. Model Type: The model should be of type llm.
  2. Features: The model should support the following features:
    • agent-thought
    • tool-call
    • multi-tool-call
    • stream-tool-call
  3. Model Properties:
    • Mode: chat
    • Context Size: 256000
  4. Parameter Rules:
    • temperature: Use the template temperature.
    • top_p: Use the template top_p.
    • max_tokens: Use the template max_tokens, with a default of 1024, minimum of 1, and maximum of 256000.
    • enable_enhance: A boolean parameter that enables enhancement features such as external search, with a default value of true. When enabled, the model can perform external searches to improve its generation results; disabling it can reduce response latency, especially for the first token in streaming output, though response quality may degrade in some scenarios [1][2][3].
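Putting the rules above together, a model definition might look like the following. This is a hedged sketch modeled on the existing Hunyuan model YAML files in Dify; field names and values should be checked against the provider's other model definitions before use:

```yaml
# Hypothetical hunyuan-turbo.yaml, following the schema of existing Hunyuan models
model: hunyuan-turbo
label:
  en_US: hunyuan-turbo
model_type: llm
features:
  - agent-thought
  - tool-call
  - multi-tool-call
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 256000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: max_tokens
    use_template: max_tokens
    default: 1024
    min: 1
    max: 256000
  - name: enable_enhance          # enhancement features such as external search
    type: boolean
    default: true
    required: false
```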

Additionally, Dify has a modular architecture that facilitates the addition of new models. The "Model Runtime" module provides an interface for invoking and authenticating various models, decoupling models from upstream and downstream processes, and allowing for horizontal expansion for developers. This architecture supports the addition of new models by defining them in the backend without needing to modify frontend logic [4][5].
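Concretely, adding a model under an existing provider in the Model Runtime is typically a matter of dropping a new YAML definition next to the provider's other models. The layout below is an assumption based on the repository structure at the time; verify against the current tree:

```
api/core/model_runtime/model_providers/hunyuan/
└── llm/
    ├── hunyuan-standard.yaml    # existing model definition
    ├── hunyuan-turbo.yaml       # new model definition to add
    └── llm.py                   # provider LLM implementation (shared)
```

Because the invocation logic in `llm.py` is shared across the provider's models, a new model that follows the same API usually needs no code changes, only the YAML file.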

For detailed steps on how to add a new provider and models, you can refer to the documentation here [6].

I hope this helps! If you need further assistance, feel free to ask.

To continue talking to Dosu, mention @dosu.

xiandan-erizo commented 1 month ago

PR link: https://github.com/langgenius/dify/pull/8372
