runpod-workers / worker-vllm

The RunPod worker template for serving our large language model endpoints. Powered by vLLM.

Issue: Update vLLM to version 0.5.0+, and a few suggestions #83

Open · nerdylive123 opened 1 month ago

nerdylive123 commented 1 month ago

Description

  1. 🌟 Upgrade vLLM: We need to bump vLLM to version 0.5.0 or beyond! 🚀
  2. 🤖 Tensorizer awesomeness: The tensorizer feature is like giving vLLM a turbo boost. 🏎️ Check out the Tensorize vLLM example for a sneak peek (and the sketch right after this list).
    • 🚀 It lets us load the model while the weights stream in from storage (but remember, the model first needs to be serialized into the tensorizer format).
  3. 📦 Pip it up: Why build vLLM from source when we can install it as a pip package? Efficiency, my friend! 🧙‍♂️
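
For context on point 2, loading a pre-serialized model looks roughly like the minimal sketch below, adapted from vLLM's tensorize_vllm_model example as of the 0.4/0.5 era. The `TensorizerConfig` import path may differ between releases, and the model name and S3 URI are placeholders, not values from this repo:

```python
# Minimal sketch of deserializing a pre-tensorized model with vLLM's
# tensorizer loader. The model must first be serialized with the companion
# tensorize_vllm_model script; the S3 URI below is a placeholder.
from vllm import LLM
from vllm.model_executor.model_loader.tensorizer import TensorizerConfig

llm = LLM(
    model="facebook/opt-125m",  # must match the model that was serialized
    load_format="tensorizer",   # select the tensorizer loader
    model_loader_extra_config=TensorizerConfig(
        tensorizer_uri="s3://my-bucket/opt-125m/model.tensors",  # placeholder
    ),
)
print(llm.generate(["Hello"])[0].outputs[0].text)
```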

Kudos to the stellar maintainer! 🌟🙌

FrederikHandberg commented 1 month ago

+1! I really would like to run Phi3VForCausalLM.

Sapessii commented 1 month ago

+1!

shivanker commented 1 month ago

+1, Gemma 2 support has been recently rolled out in vLLM!

avacaondata commented 1 month ago

+1, it would make much more sense to pip install vllm, so that when a new model is released and implemented in vLLM, it is automatically available in this worker. @alpayariyak

d4rk6un commented 1 month ago

Are there any plans to upgrade the vLLM version, and if so, can you provide a date?

PhoenixSmaug commented 1 month ago

+1, then we could finally run DeepSeek-Coder v2

harshal-pr commented 1 month ago

+1

Llama 3.1 needs vLLM 0.5.3: https://github.com/vllm-project/vllm/releases/tag/v0.5.3

Can we upgrade this worker to support it out of the box in RunPod Serverless vLLM?
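
Until the worker is bumped, a startup guard makes that requirement explicit. A minimal sketch, assuming vLLM is installed from PyPI (e.g. `pip install vllm==0.5.3`); the guard itself is hypothetical, not part of this worker:

```python
# Hypothetical startup guard: fail fast if the installed vLLM is too old
# for Llama 3.1 (which, per the release notes above, needs >= 0.5.3).
import vllm
from packaging.version import Version

MIN_VLLM = Version("0.5.3")

installed = Version(vllm.__version__)
if installed < MIN_VLLM:
    raise RuntimeError(
        f"vLLM {installed} is too old for Llama 3.1; need >= {MIN_VLLM}"
    )
print(f"vLLM {installed} OK")
```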

Lhemamou commented 1 month ago

Also waiting for the update :) Let me know if I can help!

alpayariyak commented 1 month ago

Hi all, thank you so much for the suggestions! I've joined a different company, so @pandyamarut will be taking over. It's been a great pleasure serving you all!

Lhemamou commented 1 month ago

I wish you an amazing next work experience ;) Welcome aboard, @pandyamarut!

pandyamarut commented 1 month ago

Working on it, sorry for the delay. Thanks for maintaining the repo, @alpayariyak!

TheAlexPG commented 3 weeks ago

Guys, do we know anything about an approximate time frame for the update? That would help us plan our model updates on the roadmap. Thanks!

nerdylive123 commented 2 weeks ago

Please support the new FP8 quantization; refer to the vLLM quantization docs.

I've got a whole new menu with a bunch of new options; I guess it now exposes all of the engine arguments. That's great, thank you for the update, staff and maintainers! Just the option values still need to be updated :)
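
For anyone who wants to try FP8 before it is wired into the worker's options: with vLLM's offline API it is a single engine argument. A minimal sketch; the model name is illustrative, and dynamic FP8 depends on the GPU generation:

```python
# Minimal sketch of dynamic FP8 quantization via vLLM's offline API,
# per the vLLM quantization docs referenced above. Model is a placeholder.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder model
    quantization="fp8",                           # dynamic FP8 quantization
)
out = llm.generate(["Hello!"], SamplingParams(max_tokens=16))
print(out[0].outputs[0].text)
```

In the worker, this would presumably map onto the existing QUANTIZATION environment variable once "fp8" is accepted as a value.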