langgenius / dify

Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
https://dify.ai

chore: massive update of the Gemini models based on latest documentation #8822

Closed CXwudi closed 2 days ago

CXwudi commented 2 days ago

Checklist:

> [!IMPORTANT]
> Please review the checklist below before submitting your pull request.

Description

Fixes #8821.

Also, based on the documentation, "Latest" and "Latest Stable" models are two different versions. Hence, I also separated `gemini-1.5-pro` from `gemini-1.5-pro-latest`, and likewise for the flash models.
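To make the distinction concrete, here is a minimal sketch of the resulting model list (model names are taken from Google's documentation; the dict structure is purely illustrative and is not Dify's actual provider config format):

```python
# Illustrative only: "latest" aliases and "latest stable" names point at
# different model versions in the Gemini API, so each gets its own entry.
GEMINI_MODELS = {
    "gemini-1.5-pro-latest": "latest (cutting-edge, may change between releases)",
    "gemini-1.5-pro": "latest stable",
    "gemini-1.5-flash-latest": "latest (cutting-edge, may change between releases)",
    "gemini-1.5-flash": "latest stable",
}

# The PR splits each pair into two separate entries instead of treating the
# "-latest" alias and the stable name as the same model.
for name, channel in GEMINI_MODELS.items():
    print(f"{name}: {channel}")
```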

Type of Change

Testing Instructions

Please describe the tests that you ran to verify your changes, and provide instructions so we can reproduce them. Please also list any relevant details of your test configuration.

(Screenshots attached: 2024-09-26 120133, 122123, 122248, 125204.)

crazywoola commented 2 days ago

Hello can you take a look at this feature as well? https://github.com/langgenius/dify/issues/8805

CXwudi commented 2 days ago

> Hello can you take a look at this feature as well? #8805

Hi, I could try, but I am not familiar with the Dify codebase, so please don't count on me.

CXwudi commented 2 days ago

@crazywoola About the request in https://github.com/langgenius/dify/issues/8805: are you referring to https://github.com/langgenius/dify/pull/8721, which simply removed the harm category setting, and this time you want me to do the same for the Gemini endpoints?

If so, I believe it is simply a matter of removing:

https://github.com/langgenius/dify/blob/main/api/core/model_runtime/model_providers/google/llm/llm.py#L209-L214

Correct? If so, I think I can do that.
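For context, a hedged sketch of what such a removal would amount to (the function and the exact category/threshold names are assumptions for illustration, not the actual contents of the linked `llm.py` lines):

```python
# Hypothetical sketch: the linked lines presumably pass hard-coded
# harm-category thresholds to the Gemini request. Removing them means the
# request no longer overrides Google's default safety behavior.
def build_request_kwargs(contents: list, override_safety: bool = False) -> dict:
    kwargs = {"contents": contents}
    if override_safety:
        # The kind of block being removed: every harm category forced to BLOCK_NONE.
        kwargs["safety_settings"] = {
            "HARM_CATEGORY_HARASSMENT": "BLOCK_NONE",
            "HARM_CATEGORY_HATE_SPEECH": "BLOCK_NONE",
            "HARM_CATEGORY_SEXUALLY_EXPLICIT": "BLOCK_NONE",
            "HARM_CATEGORY_DANGEROUS_CONTENT": "BLOCK_NONE",
        }
    return kwargs
```

After the change, requests would be built as in the `override_safety=False` path, leaving safety filtering to the API's defaults.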