oobabooga / text-generation-webui

A Gradio web UI for Large Language Models.
GNU Affero General Public License v3.0

Add support to vLLM inference engine - to possibly gain x10 speedup in inference #2785

Closed ofirkris closed 1 year ago

ofirkris commented 1 year ago

vLLM is an open-source LLM inference and serving library that delivers up to 24x the throughput of HuggingFace Transformers and powers Vicuna and Chatbot Arena.

Blog post: https://vllm.ai/ Repo: https://github.com/vllm-project/vllm
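
For reference, vLLM's offline batching API is only a few lines. A minimal sketch (the model name and sampling values here are illustrative, not taken from the blog post):

```python
# Minimal vLLM offline-inference sketch; assumes `pip install vllm` and a CUDA GPU.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-7b-hf")  # loads the HF model into vLLM's engine
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

# generate() batches all prompts together, which is where the throughput gains come from
outputs = llm.generate(["Explain paged attention in one sentence."], params)
for out in outputs:
    print(out.outputs[0].text)
```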

Slug-Cat commented 1 year ago

If the performance claims aren't overcooked or super situational, this could be huge

CamiloMM commented 1 year ago

AI is where you have some of the brightest minds in the world working on some of the most complicated maths and somehow someone just comes and does something like this (assuming it's real).

Are we in an "AI summer"? 😂

Ph0rk0z commented 1 year ago

It's ExLlama for everything else... and it can just have a new loader added.

tensiondriven commented 1 year ago

vLLM only gives that 24x speedup when running full-fat models with massive parallelization, so if you need to run 100 inferences at the same time, it's fast. But for most people, ExLlama is still faster/better. @turboderp has some good insights on the LocalLLaMA subreddit.

Unless someone is feeling ambitious, I think this could be closed. The issue poster probably didn't understand what vLLM is really for.

Ph0rk0z commented 1 year ago

Does tensor parallelism help multi-GPU setups? And with the multi-user support, this might actually serve the intended purpose.

cibernicola commented 1 year ago

Does anyone know anything about this? [image]

turboderp commented 1 year ago

I'm not sure how they arrive at those results. Plain HF Transformers can be mighty slow, but you have to really try to make it that slow, I feel. As for vLLM, it's not for quantized models, and as such it's quite a bit slower than ExLlama (or llama.cpp with GPU acceleration, for that matter). If you're deploying a full-precision model to serve inference to multiple clients it might be very useful, though.

github-actions[bot] commented 1 year ago

This issue has been closed due to inactivity for 30 days. If you believe it is still relevant, please leave a comment below.

yhyu13 commented 11 months ago

@oobabooga

https://github.com/oobabooga/text-generation-webui/pull/4794#issuecomment-1837714017

As we are not considering adding a new model loader for single mode, we should consider vLLM now: it frequently adds support for newly released models like Qwen, and it offers both multi-client serving and quantization (AWQ). https://github.com/vllm-project/vllm
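
A hedged sketch of what AWQ loading looks like with vLLM's Python API (the quantized model id below is an illustrative example, not something referenced in this thread):

```python
# Sketch: loading an AWQ-quantized model in vLLM via the `quantization` argument.
# The model id is an example of the usual HF hub naming convention, not verified here.
from vllm import LLM, SamplingParams

llm = LLM(model="TheBloke/Mistral-7B-Instruct-v0.1-AWQ", quantization="awq")
out = llm.generate(["Hello"], SamplingParams(max_tokens=32))
print(out[0].outputs[0].text)
```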

rafa-9 commented 10 months ago

@oobabooga is this on the roadmap?

nonetrix commented 9 months ago

Seems it's not coming for now at least https://github.com/oobabooga/text-generation-webui/pull/4860

fblgit commented 9 months ago

This should be reconsidered. The concern about plaguing the codebase with CUDA-dependent code is valid, but we should address the design constraints to make this happen rather than close the door entirely on something that could benefit ooga's tool. I guess you could serve an OpenAI-format endpoint externally from a vLLM model and override that on ooga's side. It could be merely a different script with different requirements to hack this up?

@oobabooga what would the acceptance criteria be? I find it very handy to serve/eval/play at the same time in a friendly ecosystem like ooga's.
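
For context, the "serve vLLM externally over the OpenAI format" idea could look roughly like this; the host, port, and model name are assumptions for illustration:

```python
# Run vLLM's OpenAI-compatible server separately, e.g.:
#   python -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-2-7b-hf
# then point any OpenAI-format client (including the webui's OpenAI-style integration)
# at that endpoint. Host, port, and model name below are illustrative assumptions.
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "meta-llama/Llama-2-7b-hf",
        "prompt": "Say hello in one short sentence.",
        "max_tokens": 32,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["text"])
```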

micsama commented 6 months ago

vLLM has gradually introduced support for GPTQ and AWQ models, with imminent plans to accommodate the as-yet-unmerged QLoRA and QA-LoRA work. Moreover, the acceleration it delivers is now strikingly evident. Given these developments, I propose considering vLLM support. The project is rapidly evolving and poised for a promising future.

eigen2017 commented 6 months ago

+1 for vLLM. It has become the first choice when we need to serve LLMs online. It isn't only a distributed, more-throughput thing; it also accelerates batch=1 inference, since it has FlashAttention, PagedAttention, and more. I've noticed some misunderstanding here along the lines of "parallelism is only for more TPS, not for batch=1". In fact, high parallelism, or as much parallelism as you can get, is good for batch=1 too, given how CUDA is designed.

eigen2017 commented 6 months ago

For example, vLLM manages all the tokens' KV caches in blocks, so it can be faster even when the batch size is 1.
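
A toy sketch of that block-based KV-cache bookkeeping, purely as a conceptual illustration and not vLLM's actual implementation:

```python
# Conceptual toy: each sequence maps token positions onto fixed-size blocks drawn
# from a shared pool, so memory is reserved per block rather than per maximum
# sequence length. This is NOT vLLM's real code, just the idea.
BLOCK_SIZE = 16  # tokens per block (illustrative)

class PagedKVCache:
    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))  # shared pool of physical blocks
        self.block_tables = {}                      # seq_id -> [block ids]
        self.lengths = {}                           # seq_id -> tokens stored

    def append_token(self, seq_id: int) -> tuple[int, int]:
        """Reserve space for one token's KV entry; allocate a new block only when
        the current one is full. Returns (block_id, offset_within_block)."""
        table = self.block_tables.setdefault(seq_id, [])
        n = self.lengths.get(seq_id, 0)
        if n % BLOCK_SIZE == 0:                     # current block full (or none yet)
            table.append(self.free_blocks.pop())
        self.lengths[seq_id] = n + 1
        return table[-1], n % BLOCK_SIZE

cache = PagedKVCache(num_blocks=8)
print([cache.append_token(seq_id=0) for _ in range(3)])  # same block, offsets 0, 1, 2
```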

KnutJaegersberg commented 6 months ago

yeah vLLM support should be added.