Open · jens-ghc opened 3 months ago
Hi there, if you're looking for an alternative solution for serving local models, you might want to check out Cortex (https://github.com/janhq/cortex) for headless AI operations. I appreciate you taking the time to report this. Let us know if you have any other questions! 😁
@Van-QA thanks for looking into this. This feature request is not about how to run local models: in our setup, an OpenAI-compatible server is the standardized way to access models. The request is really just about being able to extend the Jan UI with another OpenAI endpoint that has a custom URL, so we don't have to keep manually swapping out the URL of the "official" OpenAI model in Jan.
Related #3773
Is your feature request related to a problem? Please describe it
I switch between OpenAI and a local OpenAI-compatible endpoint a lot. Since swapping out the base URL on every switch is tedious, I considered repurposing one of the other remote endpoints such as https://jan.ai/docs/remote-models/openrouter. However, according to the manual at https://jan.ai/docs/remote-models/generic-openai, we should use the OpenAI server for generic OpenAI-compatible endpoints.
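For context, the only thing that usually changes between the hosted and local setup is the base URL. A minimal sketch with the official `openai` Python client shows the manual swap this request would eliminate; the local URL, port, and model id here are assumptions about a typical local server, not anything Jan-specific:

```python
from openai import OpenAI

# Hosted OpenAI: default base URL, real API key.
openai_client = OpenAI(api_key="sk-...")

# Local OpenAI-compatible endpoint. URL and port are placeholders for
# whatever the local server exposes; many local servers ignore the key.
local_client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumption: local server's /v1 route
    api_key="not-needed",
)

# The request itself is identical either way; only the client differs.
response = local_client.chat.completions.create(
    model="my-local-model",  # placeholder id for a locally served model
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```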
Describe the solution
Would it be possible to add another server for OpenAI-compatible endpoints to the configuration panel? Ideally it would let us give each endpoint a name (so it's easier to tell in a chat which model is being used), but even a generic name would be sufficient for now. That way the user can tell in their chats whether they are actually talking to OpenAI or to a local model (see the sketch below).
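To make the idea concrete, here is a hypothetical sketch of what named endpoint entries could look like. Nothing here reflects Jan's actual configuration schema; the endpoint names, URLs, and keys are made up for illustration:

```python
from openai import OpenAI

# Hypothetical named endpoints; keys and URLs are illustrative only
# and do not reflect Jan's actual configuration format.
ENDPOINTS = {
    "OpenAI (official)": {"base_url": "https://api.openai.com/v1", "api_key": "sk-..."},
    "Local server":      {"base_url": "http://localhost:8000/v1",  "api_key": "none"},
}

def client_for(name: str) -> OpenAI:
    """Build a client for a named endpoint, so the chat can display the name."""
    cfg = ENDPOINTS[name]
    return OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])

# e.g. the chat header could show "Local server" instead of "OpenAI"
client = client_for("Local server")
```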
Note: This might be related to https://github.com/janhq/jan/issues/2840, but it felt different enough to open a separate request.
Teachability, documentation, adoption, migration strategy
No response
What is the motivation / use case for changing the behavior?
Reduce the manual steps needed when switching URLs frequently. Increase usability, since chats will clearly tell the user when a conversation is not with a real OpenAI model.