Closed vjeson closed 1 week ago
👀 @vjeson
Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible. Please make sure you have given us as much context as possible.
Bot detected the issue body's language is not English, translated it automatically.
🥰 Description of requirements
Is it possible to support adding multiple custom API service providers of the same interface type? Currently, an AI proxy service address can be configured for each model interface, but only one per interface type. For example, models using the OpenAI interface type can only be given a single proxy address. In actual use, one may purchase OpenAI-compatible models from different servers, so it would be helpful to be able to add multiple service providers for each model type.
🧐 Proposed solution
Currently each interface type accepts only one AI proxy service address (e.g. a single proxy address for all OpenAI-type models). Since OpenAI-compatible models can be purchased from different servers, allow users to add multiple custom service providers per interface type.
📝 Additional information
No response
@vjeson Hello! I'm Dosu, a bot that can help you solve bugs, answer questions, and become a contributor. I'll help you while you wait for a human maintainer. Looking forward to helping you! 🤖
I found some similar discussions and issues related to your request:
Regarding your request to add multiple API service providers of the same type, it is indeed possible to customize and add multiple models of the same provider type in LobeChat. You can achieve this by setting environment variables to control and customize the model list. For example, you can use the `OPENAI_MODEL_LIST` environment variable to add, hide, or customize the display of models:
$ docker run -d -p 3210:3210 \
-e OPENAI_API_KEY=sk-xxxx \
-e OPENAI_MODEL_LIST="qwen-7b-chat,+glm-6b,-gpt-3.5-turbo" \
--name lobe-chat \
lobehub/lobe-chat
In this example:
- `qwen-7b-chat` is the default displayed model.
- `+glm-6b` is an added model.
- `-gpt-3.5-turbo` is a hidden model.

LobeChat supports multiple AI service providers, including OpenAI, Azure OpenAI, Google AI, Anthropic AI, AWS Bedrock, DeepSeek AI, OpenRouter AI, TogetherAI, Ollama, Moonshot AI, Perplexity AI, Minimax AI, Mistral AI, Groq AI, ZhiPu AI, and 01 AI. Each provider has corresponding environment variable configurations that can be used to enable or disable specific providers and customize the model list.
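The `+`/`-` prefix semantics above can be illustrated with a small shell sketch. Note this only mimics the documented syntax; it is not LobeChat's actual parser:

```shell
# Illustrative sketch of how a "+model"/"-model" list such as
# OPENAI_MODEL_LIST might be interpreted (not LobeChat's real code).
MODEL_LIST="qwen-7b-chat,+glm-6b,-gpt-3.5-turbo"
shown=""
hidden=""
IFS=','
for entry in $MODEL_LIST; do
  case "$entry" in
    -*) hidden="$hidden ${entry#-}" ;;   # "-" prefix: hide the model
    +*) shown="$shown ${entry#+}" ;;     # "+" prefix: add the model
    *)  shown="$shown $entry" ;;         # no prefix: default display
  esac
done
echo "shown:$shown"
echo "hidden:$hidden"
```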
For example, when using Azure OpenAI, you can configure it as follows:
$ docker run -d -p 3210:3210 \
-e AZURE_API_KEY=c55168be3874490ef0565d9779ecd5a6 \
-e AZURE_ENDPOINT=https://docs-test-001.openai.azure.com \
-e AZURE_API_VERSION=2024-02-01 \
-e AZURE_MODEL_LIST="gpt-35-turbo->my-deploy=GPT 3.5 Turbo" \
--name lobe-chat \
lobehub/lobe-chat
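The `AZURE_MODEL_LIST` entry above maps a model name to an Azure deployment name and a display name using `model->deployment=Display Name` syntax. As a sketch only (not LobeChat's actual parser), one entry splits like this with POSIX parameter expansion:

```shell
# Sketch: splitting one AZURE_MODEL_LIST entry of the form
# "model->deployment=Display Name" (illustrative, not LobeChat code).
entry="gpt-35-turbo->my-deploy=GPT 3.5 Turbo"
model="${entry%%->*}"      # text before "->": the model name
rest="${entry#*->}"        # text after "->": deployment + display name
deployment="${rest%%=*}"   # text before "=": the Azure deployment name
display="${rest#*=}"       # text after "=": the UI display name
echo "model=$model deployment=$deployment display=$display"
```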
This flexibility allows you to manage multiple API service providers of the same type by configuring the environment variables accordingly [4][5][6][7][8].
To continue talking to Dosu, mention @dosu.
Do you think this could be better managed by self-hosting an API relay? 🤔
I'd suggest adopting the oneapi/newapi approach; I don't really want to add channel-management capabilities to LobeChat itself.
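For reference, the oneapi/newapi approach suggested above would look roughly like this: run one-api as a single OpenAI-compatible relay that aggregates several upstream providers (configured as channels in its web UI), then point LobeChat's OpenAI-type traffic at the relay. The image name, port, and `OPENAI_PROXY_URL` usage below follow the projects' published docs, but verify the exact flags against the current READMEs:

```shell
# Sketch: one-api as a relay in front of multiple OpenAI-compatible
# upstreams (add each upstream as a channel in the one-api web UI).
$ docker run -d -p 3000:3000 --name one-api justsong/one-api

# Point LobeChat at the relay instead of a single upstream proxy address.
$ docker run -d -p 3210:3210 \
  -e OPENAI_API_KEY=sk-xxxx \
  -e OPENAI_PROXY_URL=http://localhost:3000/v1 \
  --name lobe-chat \
  lobehub/lobe-chat
```

This keeps channel management out of LobeChat while still letting one client configuration fan out to many providers.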
✅ @vjeson
This issue is closed. If you have any questions, you can comment and reply.