Closed · imtuyethan closed this 3 weeks ago
Since there will be similar requests like this in the future, and this concerns "OpenAI-compatible endpoints", why not do it like this?

Step 1: Configure a Client Connection
- Navigate to the Jan app > Settings.
- Select "Create Custom Model". The OpenAI fields can be used for any OpenAI-compatible API (see the request sketch after this list).
- Insert the API key and the endpoint URL into their respective fields...
- Click "Save" to add your model to the list. P.S. Sorry, my bad. I should have opened a new request. Anyway, give it a thought!
@norrybul Thanks - this is a great idea and exactly how we are thinking about it.
Converted all qwen2.5 family models
Problem Statement
https://www.reddit.com/r/LocalLLaMA/comments/1flkcav/qwen_25_casually_slotting_above_gpt4o_and/
Requested by: https://discord.com/channels/1107178041848909847/1110389363809976361/1287521950167466024
Users who prefer Qwen 2.5 or need its specific capabilities are unable to utilize it within Jan.
Feature Idea
Implement support for the Qwen 2.5 model in Jan. This would involve: