vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

Feature request: support ExLlama #296

Open alanxmay opened 1 year ago

alanxmay commented 1 year ago

ExLlama (https://github.com/turboderp/exllama)

It's currently the fastest and most memory-efficient executor of models that I'm aware of.

Is there interest from the maintainers in adding support for this?

SinanAkkoyun commented 1 year ago

How do you plan to add batched support for ExLlama? I am very interested in your approach, as I am trying to work on that too.

iibw commented 10 months ago

ExLlamaV2 has overtaken ExLlama in quantization performance for most cases. I hope we can get it implemented in vLLM, because it is also an incredible quantization technique. Benchmarks across all the major quantization techniques indicate that ExLlamaV2 is the best of them. Have there been any new developments since it was added to the roadmap?
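
For reference, generating from an EXL2-quantized model with the standalone exllamav2 library looks roughly like the sketch below (adapted from the upstream examples; the model path is a placeholder and exact signatures may differ between library versions):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point the config at a directory containing an EXL2-quantized model (placeholder path).
config = ExLlamaV2Config()
config.model_dir = "/models/mixtral-8x7b-exl2-4.0bpw"
config.prepare()

# Load the model, letting it split layers across the available GPUs.
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

# Simple, non-batched generation; this is the part a vLLM integration would
# replace with paged attention and continuous batching.
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

print(generator.generate_simple("Hello, my name is", settings, 128))
```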

SinanAkkoyun commented 10 months ago

Please, having exllamav2 with paged attention and continuous batching would be a big win for the LLM world.

DaBossCoda commented 10 months ago

Also looking forward to exllamav2 support

RuntimeRacer commented 10 months ago

I was hoping this would be possible, too. I recently worked with the Mixtral-8x7B model: AWQ 4-bit had significant OOM / memory overhead compared to ExLlamaV2 in 4-bit, and I ended up running the model in 8-bit with ExLlamaV2, since that turned out to be the best compromise between model capability and VRAM footprint. With ExLlamaV2 I can run it in 8-bit on 3x3090 and use the full 32k context, but in vLLM I need 4x3090 just to load it in 16-bit, and I hit OOM when I try to use the full context.

So this would definitely be an amazing addition, giving more flexibility in terms of VRAM resources.
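
For a rough sense of the numbers behind this, here is a back-of-envelope estimate of the weight memory alone, assuming the commonly cited ~46.7B total parameters for Mixtral-8x7B and 24 GiB per 3090 (KV cache, activations, and framework overhead excluded):

```python
# Back-of-envelope weight-memory estimate for Mixtral-8x7B (illustrative only).
total_params = 46.7e9  # ~46.7B total parameters

for bits, label in [(16, "fp16"), (8, "8-bit"), (4, "4-bit")]:
    gib = total_params * bits / 8 / 2**30
    print(f"{label:>5}: ~{gib:.0f} GiB of weights")

# fp16  -> ~87 GiB: more than 3x3090 (~72 GiB) can hold, and tight on 4x3090
#          (~96 GiB) once a 32k-context KV cache is added.
# 8-bit -> ~43 GiB: fits on 3x3090 with headroom for a long-context cache.
# 4-bit -> ~22 GiB
```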

theobjectivedad commented 10 months ago

+1

tolecy commented 10 months ago

+1

chricro commented 8 months ago

+1

agahEbrahimi commented 8 months ago

+1

a-creation commented 8 months ago

+1

rjmehta1993 commented 7 months ago

Supporting exllamav2 would be the biggest release yet for vLLM. +1

sapountzis commented 6 months ago

+1

kulievvitaly commented 4 months ago

+1

github-actions[bot] commented 6 days ago

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!