ChatGPTNextWeb / ChatGPT-Next-Web

A cross-platform ChatGPT/Gemini UI (Web / PWA / Linux / Win / MacOS). One click to get your own cross-platform ChatGPT/Gemini application.
https://app.nextchat.dev/
MIT License

[Feature Request]: Select the request format based on model provider + model name, rather than relying on the model name alone #4804

Open takestairs opened 1 month ago

takestairs commented 1 month ago

Problem Description

Consider the following scenario: through one-api, a gemini-pro model is exposed in the OpenAI request format, i.e. requests look like:

POST {{one_base}}/v1/chat/completions
Content-Type: application/json
Authorization: Bearer {{one_key}}

{
  "model": "gemini-pro",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "hello"
    }
  ],
  "stream": true
}

If you fill in one_base as the custom OpenAI endpoint, supply one_key, and add gemini-pro as a custom model, the app still prompts you to configure Google as the model provider.


I believe there are many similar scenarios. In fact, when the API is relayed through one-api, this client only needs to speak the OpenAI API, and multi-endpoint, multi-key rotation becomes trivial. All that is required is that the request format not be chosen solely from the model name.
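For illustration, here is a minimal sketch of the kind of model-name-only routing the issue describes; the function name and heuristic are assumptions for demonstration, not NextChat's actual code:

```typescript
// Sketch of the problematic behavior: the provider is guessed from the model
// name alone, so "gemini-pro" served via an OpenAI-compatible one-api proxy
// is still routed to the Google request format.
type Provider = "OpenAI" | "Google" | "Azure";

function providerFromModelName(model: string): Provider {
  // Assumed heuristic: any name starting with "gemini" routes to Google.
  if (model.startsWith("gemini")) return "Google";
  return "OpenAI";
}

// Even though the endpoint speaks the OpenAI format, this returns "Google",
// which triggers the "please configure Google" prompt:
providerFromModelName("gemini-pro"); // → "Google"
```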

Solution Description

My proposed solution is:

  1. Separate custom-model settings per provider: each model provider can configure its own custom models.
  2. Use provider + model name to uniquely identify the request format a model accepts.
  3. The existing vision (image-recognition) feature can still be keyed on the model name; the provider only determines the request format.
  4. From the user's selected "model name (provider)", resolve both the model name and the request format (OpenAI/Google/Azure, etc.).
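The four points above can be sketched as follows; all names here (`CustomModel`, `formatFor`, the `name@provider` key) are illustrative assumptions, not the project's actual code:

```typescript
// Each custom model is registered per provider, and the (provider, model)
// pair — not the model name alone — determines the request format.
type RequestFormat = "OpenAI" | "Google" | "Azure";

interface CustomModel {
  name: string;            // e.g. "gemini-pro"
  provider: RequestFormat; // whose request format this entry uses
}

// Point 2: provider + name uniquely keys a model's request format.
const modelKey = (m: CustomModel) => `${m.name}@${m.provider}`;

const registry = new Map<string, CustomModel>();
function registerCustomModel(m: CustomModel) {
  registry.set(modelKey(m), m);
}

// Point 4: the user's "model name (provider)" selection resolves the format.
function formatFor(name: string, provider: RequestFormat): RequestFormat {
  const m = registry.get(`${name}@${provider}`);
  if (!m) throw new Error(`unknown model ${name} for provider ${provider}`);
  return m.provider;
}

// Point 3: vision capability can still be inferred from the name alone.
const supportsVision = (name: string) => /gemini|vision/i.test(name);

// A Gemini model proxied through one-api now uses the OpenAI format:
registerCustomModel({ name: "gemini-pro", provider: "OpenAI" });
formatFor("gemini-pro", "OpenAI"); // → "OpenAI"
```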

Alternatives Considered

No response

Additional Context

No response


GrayXu commented 1 month ago

You can set an alias in one-api, and then map it back to the display name in next-web.
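A sketch of this workaround, assuming one-api's channel model-mapping feature and NextChat's `CUSTOM_MODELS` alias syntax; the alias `my-gemini` is made up, and exact syntax may differ between versions:

```shell
# In one-api, map the channel's gemini-pro to an alias (e.g. "my-gemini")
# that the client will not auto-route to Google. Then expose it in NextChat
# with a display alias so users still see "gemini-pro":
CUSTOM_MODELS="+my-gemini=gemini-pro"
```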
