janhq / models

Models support in Jan and Cortex
MIT License

feat: Support Qwen 2.5 #23

Closed by imtuyethan 3 weeks ago

imtuyethan commented 2 months ago

Problem Statement

https://www.reddit.com/r/LocalLLaMA/comments/1flkcav/qwen_25_casually_slotting_above_gpt4o_and/

Requested by: https://discord.com/channels/1107178041848909847/1110389363809976361/1287521950167466024

Users who prefer Qwen 2.5 or need its specific capabilities are unable to utilize it within Jan.

Feature Idea

Implement support for the Qwen 2.5 model in Jan. This would involve:

norrybul commented 1 month ago

Since there will be similar requests like this in the future, and this concerns "OpenAI-compatible endpoints", why not handle it like this?

Step 1: Configure a Client Connection

  1. Navigate to the Jan app > Settings.
  2. Select "Create Custom Model". The OpenAI fields can be used for any OpenAI-compatible API.
  3. Insert the API Key and the endpoint URL into their respective fields... (a minimal client sketch follows this list).
  4. Click "Save" to add your model to the list.

PS. Sorry, my bad. I should have opened a new request instead. Anyway, give it a thought!
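To illustrate what such a custom connection amounts to, here is a minimal sketch using the `openai` Python client against an OpenAI-compatible endpoint. The base URL, API key, and model id below are placeholders, not values confirmed in this thread; substitute whatever you entered in Jan's "Create Custom Model" settings.

```python
# Minimal sketch: calling any OpenAI-compatible endpoint that serves Qwen 2.5.
# base_url, api_key, and the model name are placeholders -- use the values
# from your own provider / Jan settings.
from openai import OpenAI

client = OpenAI(
    base_url="https://example.com/v1",  # hypothetical OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",             # whatever key the provider issued to you
)

response = client.chat.completions.create(
    model="qwen2.5-72b-instruct",       # hypothetical model id; use the one your provider exposes
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```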
dan-homebrew commented 1 month ago


@norrybul Thanks - this is a great idea and exactly how we are thinking about it.

hahuyhoang411 commented 1 month ago

Converted the whole qwen2.5 family:

https://huggingface.co/cortexso/qwen2.5

(Screenshot attached: 2024-10-26 21:48)
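For anyone who wants to grab one of the converted GGUF files directly, here is a rough sketch using `huggingface_hub`. Only the repo id (`cortexso/qwen2.5`) comes from the link above; the branches and filenames it actually contains are not confirmed in this thread, so the script lists them before downloading.

```python
# Rough sketch: list and download GGUF files from the converted repo linked above.
# The repo id is from this thread; the exact filenames are an assumption you
# should verify on the Hugging Face Hub.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "cortexso/qwen2.5"

# Inspect which GGUF files the repo exposes before picking one.
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print(gguf_files)

# Download the first one as an example; in practice pick the quantization you want.
if gguf_files:
    local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
    print("Downloaded to", local_path)
```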