langgenius / dify

Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
https://dify.ai

WenXin ERNIE model does not support function calling #7259

Closed Devi1k closed 1 month ago

Devi1k commented 2 months ago

Self Checks

Dify version

cloud

Cloud or Self Hosted

Cloud

Steps to reproduce

In the studio agent mode, configure the Wenxin model and run a tool-invocation test with the query "苹果最新的手机是什么" ("What is Apple's latest phone?"). The result is: [wenxin] Bad Request Error, function calling is not supported yet.

✔️ Expected Behavior

The Google search tool is called normally and returns:

Used google_search. Apple's latest phones are the iPhone 15 Pro and iPhone 15 Pro Max. Both were released in 2023, adopt a new titanium frame that makes them lighter, and bring significant camera upgrades: the main camera is 48 MP and supports a 24 MP super-high default resolution.

If you want more details, you can check the following links:

Apple releases the iPhone 15 Pro and iPhone 15 Pro Max · Apple 2023 new product roundup

❌ Actual Behavior

[wenxin] Bad Request Error, function calling is not supported yet.

dosubot[bot] commented 2 months ago

@Devi1k Hello there! I'm here to assist you with bug fixes, answer questions, and guide you on becoming a contributor. Let's work together to resolve any issues you may have.

The Wenxin ERNIE model currently does not support function calling in Dify. This is why you are encountering the "[wenxin] Bad Request Error, function calling is not supported yet" message when trying to invoke the Google search tool [1][2].

There are plans to support function calling for the Wenxin model in future versions of Dify, as indicated by a placeholder comment in the code [1].

In the meantime, you can use alternative models that support function calling, such as OpenAI, Anthropic, Azure OpenAI, MiniMax, Moonshot, DeepSeek, Xinference, Zhipu AI, Stepfun, and Baichuan [3][4][5][6].

To configure and use a different model in Dify's studio agent mode that supports function calling, follow these steps:

  1. Add and Configure the Model Provider:

    • Navigate to Settings --> Model Providers to add and configure the LLM you are going to use.
    • Obtain the API key from the model provider's official website before using it in Dify.
  2. Select the Reasoning Model:

    • Navigate to the section where you can select the reasoning model for the Agent Assistant.
    • Choose a model that supports function calling, such as GPT-3.5 or GPT-4.
  3. Configure Agent Settings:

    • Go to the Agent settings.
    • Ensure that the inference mode is set to "Function Calling."
  4. Add Tools (if necessary):

    • In the "Tools" section, add any required tools that the Agent Assistant might need to enhance its capabilities.
  5. Write Prompts:

    • In the 'Instructions' section, write clear prompts for the Agent Assistant.
  6. Debug and Preview:

    • Use the debugging and preview options to test its effectiveness in completing tasks.
  7. Publish the Application:

    • Once satisfied with the configuration and performance, publish the application as a web app [7][8][9].
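In "Function Calling" inference mode, each tool is presented to the model as a JSON schema in the OpenAI-style function-calling format. A hedged sketch of what a google_search tool definition could look like (the description and parameter texts are illustrative, not Dify's exact payload):

```python
# Sketch of an OpenAI-style function-calling tool definition, the format a
# model such as GPT-3.5 or GPT-4 consumes in function-calling mode. Field
# texts are illustrative.
google_search_tool = {
    "type": "function",
    "function": {
        "name": "google_search",
        "description": "Search Google and return the top organic results.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The search query, e.g. the user question.",
                },
            },
            "required": ["query"],
        },
    },
}
```

The model responds with a tool call naming `google_search` and JSON arguments matching this schema; the agent executes the call and feeds the result back to the model, which is what produces the "已使用 google_search" line in the expected output.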


crazywoola commented 2 months ago

https://github.com/langgenius/dify/blob/3e63abd3352832c0494a07a7660800672f5fd1bd/api/core/model_runtime/model_providers/openai/llm/gpt-3.5-turbo-16k-0613.yaml#L6-L9

As you can see, the Wenxin LLM doesn't support those features.
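For reference, a Dify model YAML that does support agent tool use declares it in a `features` block roughly like the one below (a sketch from memory; check the linked file for the exact entries):

```yaml
# Illustrative features block from a tool-capable model's provider YAML.
features:
  - multi-tool-call
  - agent-thought
  - stream-tool-call
```

The Wenxin model YAMLs have no such tool-call entries, which is why the runtime refuses the request.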

If you are interested in contributing this feature, feel free to open a pull request.