vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Performance]: Discussion about optimizing _prepare_model_input_tensors #6684

Open phantomlei3 opened 2 months ago

phantomlei3 commented 2 months ago

Misc discussion on performance

Looking at #6164, _prepare_model_input_tensors has been refactored to improve performance. I investigated its performance with respect to different batch sizes, input sequence lengths, output sequence lengths, and tensor parallel sizes by running benchmark_latency.py. I found that the time spent in _prepare_model_input_tensors is directly proportional to the batch size (i.e., the number of seq_groups), which suggests an obvious speedup: parallelizing the loop in _prepare_model_input_tensors (a minimal timing sketch follows the questions below). Here are my questions related to the follow-ups mentioned in #6164:

  1. What is the design for "Parallelize the loop for seq_group_metadata in seq_group_metadata_list" to speed this up? A thread pool?
  2. Are we going to implement a CUDA kernel to "Remove the loop for seq_id in seq_ids in ModelRunnerInputBuilder._add_seq_group()"?
  3. When will these follow-up optimizations be available? I would like to know whether I can contribute.
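
For reference, here is a minimal sketch of the per-call timing mentioned above. The wrapper and the monkey-patching are my own; the actual numbers came from benchmark_latency.py, and the import path in the comment may differ across vLLM versions:

```python
# Hypothetical timing shim: wrap _prepare_model_input_tensors to log how long
# each call takes versus the number of seq_groups it processes.
import functools
import time


def time_prepare(fn):
    @functools.wraps(fn)
    def wrapper(self, seq_group_metadata_list, *args, **kwargs):
        start = time.perf_counter()
        result = fn(self, seq_group_metadata_list, *args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1e3
        print(f"_prepare_model_input_tensors: "
              f"{len(seq_group_metadata_list)} seq_groups -> {elapsed_ms:.3f} ms")
        return result
    return wrapper


# Applied via monkey-patching before running benchmark_latency.py, e.g.:
# from vllm.worker.model_runner import ModelRunner
# ModelRunner._prepare_model_input_tensors = time_prepare(
#     ModelRunner._prepare_model_input_tensors)
```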
phantomlei3 commented 2 months ago

@comaniac Hope you can answer these questions.

comaniac commented 2 months ago

Hey @phantomlei3 thanks for your interest and questions!

What is the design for "Parallelize the loop for seq_group_metadata in seq_group_metadata_list" to speed this up? A thread pool?

A thread pool may not help because Python doesn't yet have true multi-threading (the GIL) and this loop is not I/O bound. Ideally we should consider multiprocessing, but that definitely needs some investigation.
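
For illustration, here is a toy sketch of what a process-pool version could look like (not vLLM code; process_seq_group and the fake metadata dicts are placeholders). The open question is whether pickling each SequenceGroupMetadata to a worker process costs more than the loop itself:

```python
# Illustrative-only sketch: parallelizing per-seq_group CPU work with a process
# pool. Whether this actually wins depends on how expensive it is to serialize
# each metadata object and ship it to a worker process.
from concurrent.futures import ProcessPoolExecutor


def process_seq_group(seq_group_metadata):
    # Stand-in for the per-group bookkeeping done in _add_seq_group():
    # token positions, slot mappings, block tables, etc.
    return {"num_tokens": len(seq_group_metadata["token_ids"])}


def prepare_parallel(seq_group_metadata_list, max_workers=4):
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        # map() preserves input order, which matters when the per-group
        # results are later concatenated into flat input tensors.
        return list(pool.map(process_seq_group, seq_group_metadata_list))


if __name__ == "__main__":
    fake_batch = [{"token_ids": list(range(128))} for _ in range(64)]
    print(prepare_parallel(fake_batch)[:2])
```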

Are we going to implement a CUDA kernel to "Remove the loop for seq_id in seq_ids in ModelRunnerInputBuilder._add_seq_group()"?

After the refactoring, this loop no longer includes any GPU operations. It only processes inputs in Python, and .build() is now in charge of moving them to the GPU. The major optimization I'm thinking of here is leveraging efficient CPU tensor operations, such as numpy, to speed it up.
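
As a rough illustration of that direction, here is a simplified sketch (not the actual vLLM code; BLOCK_SIZE and the slot-mapping helpers are made up for the example) that replaces a per-position Python loop with a vectorized numpy computation:

```python
# Simplified sketch: the slot-mapping formula below mirrors the
# block_number * block_size + block_offset bookkeeping, but it is not the
# actual vLLM implementation.
import numpy as np

BLOCK_SIZE = 16


def slot_mapping_loop(positions, block_table):
    slots = []
    for pos in positions:
        block_number = block_table[pos // BLOCK_SIZE]
        slots.append(block_number * BLOCK_SIZE + pos % BLOCK_SIZE)
    return slots


def slot_mapping_numpy(positions, block_table):
    pos = np.asarray(positions)
    table = np.asarray(block_table)
    # One vectorized pass instead of len(positions) Python iterations.
    return table[pos // BLOCK_SIZE] * BLOCK_SIZE + pos % BLOCK_SIZE


positions = list(range(1024))
block_table = list(range(64))  # 64 blocks * 16 tokens = 1024 positions
assert slot_mapping_loop(positions, block_table) == slot_mapping_numpy(
    positions, block_table).tolist()
```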

When will these follow-up optimizations be available? I would like to know whether I can contribute.

There's no concrete timeline yet, but your contributions are definitely welcome. Are you on the vLLM Discord? Please ping me there (same ID as my GitHub one) and we can discuss the details.