vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Roadmap] vLLM Roadmap Q3 2024 #5805

Closed simon-mo closed 1 month ago

simon-mo commented 5 months ago

Update:


This document includes the features in vLLM's roadmap for Q3 2024. Please feel free to discuss and contribute, as this roadmap is shaped by the vLLM community.

Themes.

As before, we categorized our roadmap into 6 broad themes:

Broad Model Support

Help wanted:

Hardware Support

Performance Optimizations

Production Features

Help wanted

OSS Community

Help wanted

Extensible Architecture


If an item you want is not on the roadmap, your suggestions and contributions are still welcome! Please feel free to comment in this thread, open a feature request, or create an RFC.

Jeffwan commented 5 months ago

Support multiple models in the same server

Does vLLM need multi-model support similar to what FastChat does, or something else?

CSEEduanyu commented 5 months ago

Hello, how about https://github.com/vllm-project/vllm/pull/2809?

jeejeelee commented 5 months ago

Hi, the issues mentioned in https://github.com/vllm-project/vllm/pull/5036 should also be taken into account.

MeJerry215 commented 5 months ago

Will vLLM use Triton more to optimize operator performance in the future, or will it consider relying more on the torch.compile mechanism?

And are there any plans for this?

ashim-mahara commented 4 months ago

Hi! Is there, or will there be, support for the OpenAI Batch API?

huseinzol05 commented 4 months ago

I am working on Whisper; my fork is at https://github.com/mesolitica/vllm-whisper. The frontend should later be compatible with the OpenAI API and able to stream output tokens. There are a few hiccups I am still trying to figure out, based on the T5 branch (https://github.com/vllm-project/vllm/blob/9f20ccf56b63b0b47e09069615e023287f1681f8/vllm/model_executor/layers/enc_dec_attention.py#L83):

  1. Still trying to figure out a KV cache for the encoder hidden states; otherwise each step will recompute them.
  2. There is no non-causal attention for the encoder or for cross-attention in the decoder; it seems all attention implementations in vLLM are causal.
  3. Reuse the cross-attention KV cache from the first step for subsequent steps (see the rough sketch below).
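
For illustration only (this is not how vLLM implements attention, and the module names here are made up), a minimal PyTorch sketch of the caching idea in points 1 and 3: run the encoder and the cross-attention K/V projections once per request, then reuse them at every decode step.

```python
import torch

class CrossAttentionCache:
    """Toy cache: compute the encoder output and cross-attention K/V once, reuse each step."""

    def __init__(self, encoder, k_proj, v_proj):
        self.encoder = encoder  # e.g. the Whisper audio encoder
        self.k_proj = k_proj    # decoder cross-attention key projection
        self.v_proj = v_proj    # decoder cross-attention value projection
        self.k = None
        self.v = None

    def prime(self, audio_features: torch.Tensor):
        # Run the encoder once per request (point 1) ...
        enc_out = self.encoder(audio_features)
        # ... and project the cross-attention K/V once, reusing them later (point 3).
        self.k = self.k_proj(enc_out)
        self.v = self.v_proj(enc_out)

    def step(self, decoder_query: torch.Tensor) -> torch.Tensor:
        # Non-causal attention over the full encoder sequence (point 2):
        # no causal mask is applied to the cross-attention scores.
        scores = decoder_query @ self.k.transpose(-1, -2) / self.k.shape[-1] ** 0.5
        return torch.softmax(scores, dim=-1) @ self.v
```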
huseinzol05 commented 4 months ago

I am able to load and infer (https://github.com/mesolitica/vllm-whisper/blob/main/examples/whisper_example.py), but the output is still garbage; it might be a bug related to the weights or the attention. Still debugging.

jkl375 commented 4 months ago

Do you have plans to support Ascend 910B in the future?

hibukipanim commented 4 months ago

Please consider prioritizing dynamic / just-in-time 8-bit quantization like EETQ, which doesn't require an offline quantization step. For example, a current advantage of TGI is that you can load an original 16-bit HF model as int8 just by passing the --quantize eetq arg. AFAIK its custom kernels handle outliers in higher precision at runtime, so it loses very little precision.

Previous mention in issues: https://github.com/vllm-project/vllm/issues/3261#issuecomment-1986438115. A PR for it was opened but eventually closed: https://github.com/vllm-project/vllm/pull/3614

tutu329 commented 4 months ago

DeepSeek-V2 and DeepSeek-Coder-V2 are supported now, but the AWQ and GPTQ versions are not, so these models are still not usable given their huge 236B size.

Also, MLA (Multi-head Latent Attention) in these models is not supported yet.

amritap-ef commented 4 months ago

Support for DoLa would be great!

robertgshaw2-neuralmagic commented 4 months ago

Please consider prioritizing dynamic / just-in-time 8-bit quantization like EETQ, which doesn't require an offline quantization step. For example, a current advantage of TGI is that you can load an original 16-bit HF model as int8 just by passing the --quantize eetq arg. AFAIK its custom kernels handle outliers in higher precision at runtime, so it loses very little precision.

Previous mention in issues: #3261 (comment). A PR for it was opened but eventually closed: #3614

kaifronsdal commented 4 months ago

Please consider supporting transformer-based value models such as in the vllm fork https://github.com/MARIO-Math-Reasoning/vllm and the huggingface implementation https://huggingface.co/docs/trl/models#trl.AutoModelForCausalLMWithValueHead. The only thing that changes is adding a head to the end of the model to predict a value instead of logits. This would be a powerful addition to support very fast generation search and open up the possibility of more effective methods such as MCTS compared to traditional prompt based approaches such as self-consistency, CoT, ToT, etc.
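
For context, a minimal sketch of what such a value model looks like, roughly following the TRL value-head idea (a scalar head over the last hidden states). The class name and shapes here are illustrative, not an existing vLLM API:

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM

class CausalLMWithValueHead(nn.Module):
    """Causal LM plus a scalar value head over the final hidden states (sketch)."""

    def __init__(self, model_name: str):
        super().__init__()
        self.lm = AutoModelForCausalLM.from_pretrained(model_name)
        self.value_head = nn.Linear(self.lm.config.hidden_size, 1)

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor):
        out = self.lm(input_ids, attention_mask=attention_mask,
                      output_hidden_states=True)
        hidden = out.hidden_states[-1]                  # (batch, seq, hidden)
        values = self.value_head(hidden).squeeze(-1)    # (batch, seq) value estimates
        return out.logits, values                       # logits for generation, values for search
```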

haichuan1221 commented 4 months ago

Please consider supporting transformer-based value models such as in the vllm fork https://github.com/MARIO-Math-Reasoning/vllm and the huggingface implementation https://huggingface.co/docs/trl/models#trl.AutoModelForCausalLMWithValueHead. The only thing that changes is adding a head to the end of the model to predict a value instead of logits. This would be a powerful addition to support very fast generation search and open up the possibility of more effective methods such as MCTS compared to traditional prompt based approaches such as self-consistency, CoT, ToT, etc.

Thank you for your nice contribution! I wonder whether it would be possible for you to fork a branch from vLLM instead of creating a new repository, so that anyone can see what the new contribution changes?

hibukipanim commented 4 months ago
  • Have you tried fp8 marlin? Run with --quantization fp8 and we will quantize the weights to fp8 in place. This will be faster and more accurate than eetq [note: requires ampere +]

Yes, thanks @robertgshaw2-neuralmagic, I have been trying it in recent days and it does look promising. Happy to hear you believe it's more accurate than EETQ. I can confirm that Llama-70B-Instruct got almost the same MMLU score with fp8 (80.56 vs 80.7).

It would be great if it could load and quantize the layers iteratively, because right now, if the 16-bit model can't fit in the GPU, we have to quantize it offline first. But the fact that there is an option to do "dynamic" quantization without calibration data is great. Thanks for this.

robertgshaw2-neuralmagic commented 4 months ago
  • Have you tried fp8 marlin? Run with --quantization fp8 and we will quantize the weights to fp8 in place. This will be faster and more accurate than eetq [note: requires ampere +]

Yes, thanks @robertgshaw2-neuralmagic, I have been trying it in recent days and it does look promising. Happy to hear you believe it's more accurate than EETQ. I can confirm that Llama-70B-Instruct got almost the same MMLU score with fp8 (80.56 vs 80.7).

It would be great if it could load and quantize the layers iteratively, because right now, if the 16-bit model can't fit in the GPU, we have to quantize it offline first. But the fact that there is an option to do "dynamic" quantization without calibration data is great. Thanks for this.

It should be more accurate and much, much faster, so I think we will not prioritize adding EETQ ourselves (though we will of course accept a contribution).

Iterative quantization is on my list, ideally this week.
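
For anyone following along, the in-place fp8 path can be exercised with the offline API roughly like this (the model name and tensor_parallel_size are just examples, and Ampere or newer is required as noted above):

```python
from vllm import LLM, SamplingParams

# Weights are loaded in 16-bit and quantized to fp8 in place; no calibration data needed.
llm = LLM(
    model="meta-llama/Meta-Llama-3-70B-Instruct",
    quantization="fp8",
    tensor_parallel_size=4,
)

outputs = llm.generate(
    ["What is the capital of France?"],
    SamplingParams(temperature=0.0, max_tokens=32),
)
print(outputs[0].outputs[0].text)
```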

DarkLight1337 commented 4 months ago

Hi! Is there or will there be support for the OpenAI Batch API ?

vLLM currently has partial support for this (#4794).
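
For reference, the partially supported path in #4794 consumes an OpenAI-style batch file offline. A rough sketch of preparing one (the model name, file paths, and exact CLI flags below are illustrative; check the docs for the current interface):

```python
import json

# Each line follows the OpenAI Batch API request format; custom_id lets you match
# results in the output file back to the originating request.
prompts = ["Hello!", "Summarize vLLM in one sentence."]
with open("batch_input.jsonl", "w") as f:
    for i, prompt in enumerate(prompts):
        request = {
            "custom_id": f"request-{i}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": "meta-llama/Meta-Llama-3-8B-Instruct",
                "messages": [{"role": "user", "content": prompt}],
                "max_tokens": 64,
            },
        }
        f.write(json.dumps(request) + "\n")

# Then run something along the lines of:
#   python -m vllm.entrypoints.openai.run_batch \
#       -i batch_input.jsonl -o batch_output.jsonl \
#       --model meta-llama/Meta-Llama-3-8B-Instruct
```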

w013nad commented 4 months ago

Hi! Is there or will there be support for the OpenAI Batch API ?

vLLM currently has partial support for this (#4794).

This requires a completely new instance of vLLM. It would be nice if we could just call an existing API with a batch request, like you do with the OpenAI Batch API.

ashim-mahara commented 4 months ago

Hi! Is there or will there be support for the OpenAI Batch API ?

vLLM currently has partial support for this (#4794).

This requires a completely new instance of vLLM. It would be nice if we could just call an existing API with a batch request, like you do with the OpenAI Batch API.

Exactly my thoughts. I could help with the build. I already have a nano-library that interfaces with OpenAI directly at ashim-mahara/odbg.

The primary problem I have identified is tracking the request origins in case of dynamic batching by vLLM. The first part is easier if batches are executed sequentially, but they would still need to be saved to disk somewhere for retrieval later.

simon-mo commented 4 months ago

an existing API with a batch request like you do with the OpenAI Batch API.

@w013nad (or others), please feel free to open an RFC for this to discuss the ideal API. The main challenge is around file storage I believe.

warlockedward commented 4 months ago

Hopefully, the function_call and tool_choice features will be implemented soon, and will additionally support models like Qwen2.

akhilreddy0703 commented 3 months ago

Hi all,

CPU optimizations to support GGUF models!

My thought is that adding CPU optimizations to vLLM makes it more robust.

If anyone is already looking into this, please let me know. I want to work on this part and I'm open to helping/contributing to it.

Thanks

dongfangduoshou123 commented 3 months ago

Hopefully, the function_call and tool_choice features will be implemented soon, and will additionally support models like Qwen2.

Ollama already supports tool use from version 0.3.0; see: https://ollama.com/blog/tool-support

fodevac33 commented 3 months ago

Any chance that you guys can implement the DRY repetition penalty? I sorely miss it from backends like Oobabooga or Kobold.

yiakwy-xpu-ml-framework-team commented 3 months ago

We want to see more improvement on the compiler side, since this is the major gap between vLLM and TRT-LLM (with its Myelin compiler).

By the way, what's your opinion on SGLang (they extensively use torch.compile to optimize the ML workload) and their released benchmark? @simon-mo

DarkLight1337 commented 3 months ago

Hi all,

CPU optimizations to support GGUF models!

My thought is that adding CPU optimizations to vLLM makes it more robust.

  • I know that IPEX has already been added to the project.
  • Projects like llama.cpp have been the go-to inference server for running models at lower precision on CPU, and it even provides an HTTP server to host a GGUF model, but the problem with llama.cpp is that it doesn't handle parallel requests the way vLLM does.
  • I've tested the llama.cpp server for performance with a quantized llama3-8b model (int4 precision), and the results are very promising.
  • Adding support for running quantized (GGUF) models on CPU using the vLLM server would be a very worthwhile item for this roadmap.

If anyone is already looking into this, please let me know. I want to work on this part and I'm open to helping/contributing to it.

Thanks

@akhilreddy0703 #5191 has just been merged, providing support for GGUF models.
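
For anyone who wants to try it, a rough sketch of loading a GGUF checkpoint with the offline API; the file path and tokenizer repo are placeholders, and the exact requirements (single-file GGUF, matching tokenizer) are described in the feature's docs:

```python
from vllm import LLM, SamplingParams

# Point `model` at a local single-file GGUF checkpoint and reuse the original
# model's tokenizer, since the GGUF file may not ship a usable one.
llm = LLM(
    model="/models/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf",
    tokenizer="meta-llama/Meta-Llama-3-8B-Instruct",
)

out = llm.generate(["Hello!"], SamplingParams(max_tokens=16))
print(out[0].outputs[0].text)
```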

gabrielmbmb commented 3 months ago

Hi, I would like to contribute to the Reward model API. Do you have any suggestions or ideas in mind for this feature?

tsaoyu commented 3 months ago

Hi, I would like to contribute to the Reward model API, do you have any suggestions or ideas in mind for this feature?

A good starting point might be an API similar to this: https://github.com/OpenRLHF/OpenRLHF/pull/391/files

tsaoyu commented 3 months ago

Support multiple models in the same server

Does vLLM need multi-model support similar to what FastChat does, or something else?

I'm up for this; supporting multiple models, or the same model at different versions, has good use cases in the era of synthetic data. But I would suggest exposing this feature at the engine level. My current recipe is to use LangChain to abstract a layer on top of Ray, where Ray is in charge of distributed model loading and inference.
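
As a toy illustration of what "multiple models in the same server" could look like above the engine (none of this is an existing vLLM feature; it just runs two independent engines in one process and routes by name, leaving the hard scheduling and memory-sharing problems untouched):

```python
from vllm import LLM, SamplingParams

# Two independent engines sharing one GPU by splitting the memory budget.
engines = {
    "llama-3-8b": LLM(model="meta-llama/Meta-Llama-3-8B-Instruct",
                      gpu_memory_utilization=0.45),
    "qwen2-7b": LLM(model="Qwen/Qwen2-7B-Instruct",
                    gpu_memory_utilization=0.45),
}

def generate(model_name: str, prompt: str) -> str:
    # Route the request to the engine registered under `model_name`.
    out = engines[model_name].generate([prompt], SamplingParams(max_tokens=32))
    return out[0].outputs[0].text

print(generate("qwen2-7b", "Hi!"))
```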

amritap-ef commented 3 months ago

Is there a way to pass in a custom decoding config in offline inference mode for different prompts, i.e. use Outlines to generate a custom JSON output per prompt? It seems that currently it is only possible to pass in a single decoding config for all prompts, so it would be great to have this feature!
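
For comparison, the OpenAI-compatible server already accepts a per-request schema through the guided_json extra parameter, so online this is possible today; a rough sketch (server URL, model name, and schema are just examples). The same per-prompt control in offline mode is what's missing:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# A different JSON schema can be attached to every request.
person_schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}

resp = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "Give me a person as JSON."}],
    extra_body={"guided_json": person_schema},  # Outlines-backed guided decoding
)
print(resp.choices[0].message.content)
```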

yiakwy-xpu-ml-framework-team commented 3 months ago

Is there a way to pass in a custom decoding config in offline inference mode for different prompts, i.e. use Outlines to generate a custom JSON output per prompt? It seems that currently it is only possible to pass in a single decoding config for all prompts, so it would be great to have this feature!

For offline inference mode, would it be more efficient to organize the data and create an engine backend for each type of prompt?

I am more interested in deciding the decoding config online for different types of incoming inputs. Instead of using a chain of inferences, one to make that judgement and one to do the actual inference, it is worth trying to do it before prefill or within a few rounds of generation.

yiakwy-xpu-ml-framework-team commented 3 months ago

Hi, I would like to contribute to the Reward model API, do you have any suggestions or ideas in mind for this feature?

A good starting point might be an API similar to this: https://github.com/OpenRLHF/OpenRLHF/pull/391/files

Though, from limited hands-on experience with our MegatronPPOTrainerEngine, you can accelerate reward/critic generation, reward models are specific to LLM alignment, which is out of the scope of vLLM.

The challenge is the huge memory required for both the host CPU and its co-processor.

The memory pressure comes from the fact that the optimizer shards of the actor (the fine-tuned GPT head) and of the critic model (initialized with the reward model's parameters) co-exist with the shards of the model parameters (with no DDP copies on other GPU parallel groups).

And in the last pipeline stage of the model, we need a full copy of an actor and a reward model, which is where the peak memory usage across the whole PPO training's PP stages is reached.

It is a very complex situation; you cannot simply tackle it by hosting the frozen model outside of the training GPUs. vLLM does provide a serving mode, and you can make use of it.

So my suggestion is: keep the relevant alignment features solely in the relevant repositories.

@gabrielmbmb

amritap-ef commented 3 months ago

Is there a way to pass in a custom decoding config in offline inference mode for different prompts, i.e. use Outlines to generate a custom JSON output per prompt? It seems that currently it is only possible to pass in a single decoding config for all prompts, so it would be great to have this feature!

For offline inference mode, would it be more efficient to organize the data and create an engine backend for each type of prompt?

I am more interested in deciding the decoding config online for different types of incoming inputs. Instead of using a chain of inferences, one to make that judgement and one to do the actual inference, it is worth trying to do it before prefill or within a few rounds of generation.

The trouble in my use case is that each prompt requires a slightly different JSON schema depending on the input to the prompt. It would be great if this could be treated similarly to online inference in that sense.

agm-eratosth commented 3 months ago

Hi, what happened to "ARM aarch-64 support for AWS Graviton based instances and GH200" from the Q2 2024 roadmap? https://github.com/vllm-project/vllm/issues/3861

ayush9818 commented 3 months ago

Hi, I wanted to contribute to the multi-token prediction feature. Is there any feature requirement or starting point for this?

Here is what I have so far: https://github.com/vllm-project/vllm/issues/5683. What kind of LLM class would be a good starting point for this?

nivibilla commented 3 months ago

Hey, can this be looked at please? I'm not able to run any mixture-of-experts models on L4 GPUs (EC2 G6 instances) due to the Triton issue mentioned.

niuzheng168 commented 2 months ago

More and more speech models are using an LLM to predict non-text tokens. Models like ChatTTS or FishTTS all use a LLaMA to predict speech tokens.
But unlike a LLaMA for text, the speech LLaMA uses multiple lm_heads to predict more than one token in parallel, and therefore sums the n token embeddings when building the LLM input embedding. I am currently trying to make ChatTTS run with vLLM (see here), but a lot of code needs updating and it seems to break some fundamental design choices. So maybe you can consider supporting it officially; it would definitely have a big impact on speech solutions.
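
To make the architectural difference concrete, here is a toy PyTorch sketch of the two changes described above (the class and method names are made up; this is not the actual ChatTTS/FishTTS code): several lm_heads predict one token per codebook in parallel, and the per-codebook embeddings are summed to form the LLM input embedding.

```python
import torch
import torch.nn as nn

class MultiCodebookHead(nn.Module):
    """Toy speech-LLM input/output layer: N codebooks handled in parallel per step."""

    def __init__(self, hidden_size: int, vocab_size: int, num_codebooks: int = 4):
        super().__init__()
        self.embeds = nn.ModuleList(
            nn.Embedding(vocab_size, hidden_size) for _ in range(num_codebooks))
        self.heads = nn.ModuleList(
            nn.Linear(hidden_size, vocab_size) for _ in range(num_codebooks))

    def embed(self, codebook_tokens: torch.Tensor) -> torch.Tensor:
        # codebook_tokens: (batch, seq, num_codebooks).
        # Sum the per-codebook embeddings to build the LLM input embedding.
        return sum(emb(codebook_tokens[..., i]) for i, emb in enumerate(self.embeds))

    def predict(self, hidden_states: torch.Tensor) -> list[torch.Tensor]:
        # One set of logits per codebook, all predicted from the same hidden state.
        return [head(hidden_states) for head in self.heads]
```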

ChengyuZhu6 commented 2 months ago

Support multiple models in the same server

Does vLLM need multi-model support similar to what FastChat does, or something else?

I'm up for this; supporting multiple models, or the same model at different versions, has good use cases in the era of synthetic data. But I would suggest exposing this feature at the engine level. My current recipe is to use LangChain to abstract a layer on top of Ray, where Ray is in charge of distributed model loading and inference.

I think this is the difference in implementation at different granularities.

Shreyansh1311 commented 2 months ago

Any chance that you guys can implement the DRY repetition penalty? I sorely miss it from backends like Oobabooga or Kobold.

Hi, it would be really great to have DRY implemented in vLLM. DRY has been a game changer for all the small models, since they tend to repeat much more. It's a really effective sampling method, and it would be really useful to have it here as well.
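
For anyone unfamiliar, the core of DRY is an exponential penalty on tokens that would extend a sequence already seen earlier in the context. A slow, illustrative sketch of the idea (parameter names and typical values follow the original text-generation-webui proposal; this is not vLLM code and it skips details like sequence breakers):

```python
import torch

def dry_penalty(logits: torch.Tensor, input_ids: list[int],
                multiplier: float = 0.8, base: float = 1.75,
                allowed_length: int = 2) -> torch.Tensor:
    """Penalize tokens that would continue a repeated subsequence (O(n^2) sketch)."""
    n = len(input_ids)
    penalties: dict[int, float] = {}
    for i in range(n - 1):
        # How many trailing context tokens match the subsequence ending at position i?
        match = 0
        while match <= i and input_ids[i - match] == input_ids[n - 1 - match]:
            match += 1
        if match >= allowed_length:
            # The token that continued the earlier occurrence would extend the repeat,
            # so penalize it; the penalty grows exponentially with match length.
            candidate = input_ids[i + 1]
            penalty = multiplier * base ** (match - allowed_length)
            penalties[candidate] = max(penalties.get(candidate, 0.0), penalty)
    out = logits.clone()
    for token, penalty in penalties.items():
        out[token] -= penalty
    return out
```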

ksjadeja commented 1 month ago

Do we have plans to support #5540? We have a production-level use case and would really appreciate it if someone could look into it from Q4 onwards.

Yikun commented 1 month ago

Any chance of supporting Ascend NPU as a vLLM backend in the 2024 Q4 roadmap?

simon-mo commented 1 month ago

We have published our Q4 roadmap here: https://github.com/vllm-project/vllm/issues/9006. Feedback welcome! If there's a feature you would like to propose that didn't get done in Q3, please post it in #9006 and we will continue to monitor and try to prioritize. Thank you for all the feedback 🙏