alanxmay opened this issue 1 year ago
How do you plan on adding batched support for ExLlama? I am very interested in your approach, as I am trying to work on that too.
ExLlamaV2 has overtaken ExLlama in quantization performance for most cases. I hope we can get it implemented in vLLM, because it is also an incredible quantization technique. Benchmarks across the major quantization techniques indicate ExLlamaV2 is the best of them. Have there been any new developments since it was added to the roadmap?
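To make the request concrete, here is a hypothetical sketch of what EXL2 support might look like from the vLLM user side. The `quantization="exl2"` value and the model name are assumptions for illustration only; vLLM does not expose such an option today.

```python
# Hypothetical usage sketch -- illustrates the feature request only.
# quantization="exl2" is NOT a real vLLM option at the time of writing,
# and the model name below is a placeholder for any EXL2-quantized checkpoint.
from vllm import LLM, SamplingParams

llm = LLM(
    model="some-org/Llama-2-7B-exl2-4.0bpw",  # placeholder EXL2 checkpoint
    quantization="exl2",                      # hypothetical flag, does not exist yet
    max_model_len=4096,
)

params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)
outputs = llm.generate(["The advantages of EXL2 quantization are"], params)
print(outputs[0].outputs[0].text)
```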
Please, having ExLlamaV2 with paged attention and continuous batching would be a big win for the LLM world.
Also looking forward to exllamav2 support
I was hoping this would be possible too. I recently worked with the Mixtral-8x7B model: AWQ 4-bit had significant OOM / memory overhead compared to ExLlamaV2 in 4-bit, so I ended up running the model in 8-bit with ExLlamaV2, which turned out to be the best compromise between model capability and VRAM footprint. With ExLlamaV2 I can run it in 8-bit on 3x3090 and use the full 32k context, but in vLLM I need 4x3090 just to load it in 16-bit, and I hit OOM when I try to use the full context.
So this would definitely be an amazing addition, giving much more flexibility in terms of VRAM resources.
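For reference, a minimal sketch of the 16-bit vLLM setup described above; the parallelism and context values mirror the comment, and other settings are left at defaults.

```python
# Minimal sketch of the unquantized 16-bit vLLM setup described above.
# Values mirror the comment: 4-way tensor parallelism (4x RTX 3090) and 32k context.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    tensor_parallel_size=4,   # 4 GPUs needed just to load the 16-bit weights
    dtype="bfloat16",
    max_model_len=32768,      # the full 32k context is where OOM shows up
)

out = llm.generate(["Hello"], SamplingParams(max_tokens=64))
print(out[0].outputs[0].text)
```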
+1
+1
+1
+1
+1
Supporting ExLlamaV2 would be the biggest release yet for vLLM. +1
+1
+1
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
ExLlama (https://github.com/turboderp/exllama)
It's currently the fastest and most memory-efficient model executor that I'm aware of.
Is there interest from the maintainers in adding support for it?
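For context, this is roughly what standalone generation with ExLlamaV2 (ExLlama's successor, which most of this thread is asking about) looks like, following the library's published example pattern; the model path is a placeholder, and exact class and method names should be treated as an approximation of the API at the time of this thread.

```python
# Rough sketch of standalone ExLlamaV2 inference, based on the library's
# example scripts; exact names are an approximation, not a guaranteed API.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/exl2-quantized-model"  # placeholder path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)          # split weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("ExLlamaV2 support in vLLM would", settings, num_tokens=128))
```

This generator handles one request at a time, which is precisely the gap the thread wants closed: combining ExLlamaV2's quantization with vLLM's paged attention and continuous batching.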