Open takestairs opened 1 month ago
Title: [Feature Request]: Select the request format based on the model service provider + model name, rather than simply relying on the model name
Problem Description

Consider the following scenario: through one-api, the gemini-pro model is exposed in the OpenAI request format, i.e. the request looks like:
```
POST {{one_base}}/v1/chat/completions
Content-Type: application/json
Authorization: Bearer {{one_key}}

{
  "model": "gemini-pro",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "hello"
    }
  ],
  "stream": true
}
```
If you enter one_base as the custom OpenAI endpoint, enter one_key, and add gemini-pro as a custom model, the app still prompts you to configure Google as the model service provider.
I believe there are many similar situations. In fact, if you relay the API through one-api, an OpenAI API client can easily implement polling across multiple endpoints and multiple keys; it only requires that the request format used for a model is not determined solely by the model name.
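The multi-endpoint, multi-key polling described above can be sketched as a simple round-robin over (endpoint, key) pairs. This is a minimal illustration, not NextChat's actual implementation; the relay URLs and keys are placeholders:

```python
from itertools import cycle

# Hypothetical OpenAI-compatible relay endpoints (e.g. one-api instances).
# URLs and keys below are placeholders, not real services.
ENDPOINTS = [
    ("https://relay-a.example.com/v1", "sk-key-a"),
    ("https://relay-b.example.com/v1", "sk-key-b"),
]

_rotation = cycle(ENDPOINTS)

def build_chat_request(model: str, messages: list) -> dict:
    """Pick the next (base, key) pair round-robin and describe an
    OpenAI-format /chat/completions request for it."""
    base, key = next(_rotation)
    return {
        "url": f"{base}/chat/completions",
        "headers": {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {key}",
        },
        "body": {"model": model, "messages": messages, "stream": True},
    }
```

Because every relay speaks the OpenAI format, the same builder works for gemini-pro as for any OpenAI model; only the lookup of which format to use needs to change.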
Solution Description

My approach to this problem: use model service provider + model name to uniquely describe the request format a model accepts. The user selects an entry of the form model name (service provider), which determines both the model name and the request format (OpenAI/Google/Azure, etc.).

Alternatives Considered

No response

Additional Context

No response
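The proposed scheme can be sketched as a lookup keyed by (provider, model) rather than by model name alone. The table below is an illustrative assumption, not the project's actual configuration:

```python
# Request format is resolved from (provider, model), not from the model
# name alone — so "gemini-pro" served through an OpenAI-compatible relay
# uses the OpenAI format, while the same model name under Google does not.
REQUEST_FORMAT = {
    ("OpenAI", "gpt-4"): "openai",
    ("OpenAI", "gemini-pro"): "openai",   # gemini-pro relayed via one-api
    ("Google", "gemini-pro"): "google",
}

def resolve_format(provider: str, model: str) -> str:
    """Return the request format for a (provider, model) pair."""
    try:
        return REQUEST_FORMAT[(provider, model)]
    except KeyError:
        raise ValueError(f"No request format configured for {model} ({provider})")
```

With this lookup, the same model name can appear under several providers without the client guessing the format from the name.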
You can set an alias in one-api, and then restore the display name in next-web.
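This workaround can be sketched as a client-side display-name map: the relay serves the model under an alias (so it is treated as an OpenAI-format model), and the UI maps the alias back to a friendly name. The alias `gemini-pro-oa` below is a made-up example:

```python
# Hypothetical alias configured in one-api, mapped back to a display
# name in the UI layer. The alias name is illustrative only.
DISPLAY_NAMES = {
    "gemini-pro-oa": "gemini-pro",
}

def display_name(model_id: str) -> str:
    """Show the friendly name for aliased models, pass others through."""
    return DISPLAY_NAMES.get(model_id, model_id)
```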