Open ImYrS opened 7 months ago
Hi ImYrS, I understand the reasoning behind this change. Is this something you'd like to contribute? This has to do with the proxy service.
Hi, glad to hear from you, but I'm sorry I don't have time to contribute at the moment.
My personal understanding is that this doesn't really require changing the proxy service; it may be enough to add some ENV vars for the API endpoint or similar, and to allow a custom model name to be entered.
Since I haven't read this project's code completely, my understanding may be wrong. Please point out any problems, thank you very much!
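For illustration only, a minimal sketch of what the env-var approach could look like, assuming a Python service built on the official openai client; the variable names OPENAI_API_BASE_URL and OPENAI_MODEL are placeholders, not names this project actually defines:

```python
import os
from openai import OpenAI

# Hypothetical env vars -- the project would pick its own names.
base_url = os.environ.get("OPENAI_API_BASE_URL", "https://api.openai.com/v1")
model = os.environ.get("OPENAI_MODEL", "gpt-4o-mini")

# The same client works against any OpenAI-compatible endpoint
# (api.openai.com, one-api, Ollama, LM Studio, ...).
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"], base_url=base_url)

response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```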
I'd also like to see this. Being able to set a local AI endpoint is very important to me.
Contributions for this are welcome. At this point, I don't have the time to dedicate to building this feature. Sorry.
@ImYrS & @PyrokineticDarkElf can you briefly describe your use cases for different models and a custom API base URL? I started looking at it and I want to make sure I am able to provide a solution that solves this need.
For the base URL, I think a simple env variable should suffice. For the models, we'll need a different solution.
I think using an env var should work for my needs. My use case is just to use a local LLM server (Ollama, LM Studio, etc.) rather than an online provider.
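As a rough illustration (not project code), pointing the same OpenAI-style client at a local server only means changing the base URL; the ports and model name below are common defaults and are assumptions about one particular local setup:

```python
from openai import OpenAI

# Ollama's OpenAI-compatible endpoint usually listens on :11434;
# LM Studio's local server typically uses http://localhost:1234/v1 instead.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # local servers generally accept any placeholder key
)

reply = client.chat.completions.create(
    model="llama3",  # whichever model is pulled/loaded locally
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)
```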
@PyrokineticDarkElf @ImYrS As you can see, there's a PR to address this issue. Please take a look and confirm that it addresses your issue before I merge it.
Proposal
Use-Case
For self-hosted deployments, some users need to set a different OpenAI endpoint; in China, for example, api.openai.com is blocked. In addition, many users rely on a project called one-api to manage models from many different providers: it converts them to an OpenAI-compatible API, so models that are not from OpenAI or Claude can be used to test prompts. In summary, I hope these two features (a configurable endpoint and a custom model name) can be developed; I think they would be useful for many users.
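To make the one-api part concrete, here is a hedged sketch; the deployment URL, token, and model name are hypothetical and stand in for a real one-api instance, which accepts OpenAI-style requests and forwards them to whichever upstream provider actually serves the model:

```python
from openai import OpenAI

# one-api exposes an OpenAI-compatible /v1 surface; the URL, token,
# and model mapping below are made up for illustration.
client = OpenAI(
    base_url="https://one-api.example.com/v1",
    api_key="sk-one-api-token",  # issued by the one-api instance, not by OpenAI
)

resp = client.chat.completions.create(
    model="glm-4",  # a non-OpenAI model that one-api routes to its provider
    messages=[{"role": "user", "content": "test prompt"}],
)
print(resp.choices[0].message.content)
```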
Is this a feature you are interested in implementing yourself?
Maybe