vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Performance]: InternVL multi-image speed is not improved compared to the original #9483

Open luohao123 opened 1 month ago

luohao123 commented 1 month ago

Your current environment

The output of `python collect_env.py`:

```text
latest vllm 0.6.1
```

Model Input Dumps

tt

🐛 Describe the bug

InternVL multi-image inference is slower than the original implementation.


DarkLight1337 commented 1 month ago

Could you elaborate more? What do you mean by the original speed?

luohao123 commented 1 month ago

Compared with torch: same device, same dtype (float16, V100). (By torch I mean HF with FlashAttention by default.)

Single image is about 20% faster, while multiple images are slower; an A100 gives the same result.

DarkLight1337 commented 1 month ago

Can you show the scripts you used to measure the performance of HF vs vLLM?

luohao123 commented 1 month ago

Hi, the test is based on the InternVL 8B model. Have you tested vLLM's speed improvement on multiple images? I am not lying: multiple images are actually slower than torch. Due to some in-house issues, I haven't had a chance to paste code here, but I think you can easily replicate the result.

DarkLight1337 commented 1 month ago

Hi, the test is based on the InternVL 8B model. Have you tested vLLM's speed improvement on multiple images? I am not lying: multiple images are actually slower than torch. Due to some in-house issues, I haven't had a chance to paste code here, but I think you can easily replicate the result.

No, we have not tested the speed for multiple images (benchmarking work for multi-modal models is still in the early stages). Since vLLM was originally designed around language generation, most of vLLM's optimizations don't currently work on the vision encoder part of the model, which may explain the decrease in speed when more images are passed. There may also be CPU bottlenecks associated with image preprocessing.

We are still busy making multi-modal support feature-complete, so it may take a while before we can focus on optimization - any help is welcome!

luohao123 commented 1 month ago

Hi, I am not an expert in acceleration, but as far as I can tell, why can't the encoder-decoder use FlashAttention?

luohao123 commented 1 month ago

How about using the flash-attn 2 package, or torch's built-in SDPA?
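
(For context, a minimal sketch, not vLLM code, comparing torch's built-in SDPA with the flash-attn package's flash_attn_func; the shapes are illustrative and flash-attn is optional.)

```python
# Sketch only: torch SDPA vs the flash-attn package on illustrative ViT-like shapes.
import torch
import torch.nn.functional as F

B, H, N, D = 1, 16, 4096, 64
q = torch.randn(B, H, N, D, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# torch SDPA picks a fused backend (flash / memory-efficient / math) automatically.
out_sdpa = F.scaled_dot_product_attention(q, k, v)

try:
    # flash_attn_func expects (batch, seqlen, nheads, headdim) layout.
    from flash_attn import flash_attn_func
    out_fa = flash_attn_func(
        q.transpose(1, 2).contiguous(),
        k.transpose(1, 2).contiguous(),
        v.transpose(1, 2).contiguous(),
    )
    print(torch.allclose(out_sdpa.transpose(1, 2), out_fa, atol=1e-2))
except ImportError:
    print("flash-attn not installed; only SDPA was run")
```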

Jeremy-J-J commented 1 month ago

Same problem

torinchen commented 1 month ago

@noooop

When diving into the code, I found that InternVL2 uses xFormers' attention, not the naive one, so the slowness may come from some other part.

torinchen commented 4 weeks ago

When diving into the code, I found that InternVL2 uses xFormers' attention, not the naive one, so the slowness may come from some other part.

@torinchen

https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/intern_vit.py#L271C1-L285C17

The vLLM code is being optimized rapidly, but as of today (2024-10-29) it is still using scaled_dot_product_attention:

```python
        if self.qk_normalization:
            B_, N_, H_, D_ = q.shape
            q = self.q_norm.forward_native(q.flatten(-2,
                                                     -1)).view(B_, N_, H_, D_)
            k = self.k_norm.forward_native(k.flatten(-2,
                                                     -1)).view(B_, N_, H_, D_)
        q = q.transpose(1, 2)
        k = k.transpose(1, 2)
        v = v.transpose(1, 2)

        x = F.scaled_dot_product_attention(q, k, v, scale=self.scale)
        x = x.transpose(1, 2).view(B, N, -1)

        x = self.proj(x)
        return x
```

https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/intern_vit.py#L353C4-L371C76

noooop commented 4 weeks ago

When diving into the code, I found that InternVL2 uses xFormers' attention, not the naive one, so the slowness may come from some other part.

vLLM installs xFormers by default, so InternVL2 uses xFormers' attention.

Although xFormers is slower than FlashAttention, the difference is not significant.

I agree with @torinchen that "the slowness may come from some other part."

noooop commented 4 weeks ago

Please submit test code to reproduce this issue.

I can help locate the problem using a profiler.
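
A minimal profiling sketch along these lines (assumptions: OpenGVLab/InternVL2-8B fits on the GPU, two local image files a.jpg/b.jpg exist, and a simplified prompt with `<image>` placeholders is used instead of the full chat template):

```python
# Sketch only: profile one multi-image request to see where the time goes.
import torch
from torch.profiler import ProfilerActivity, profile
from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(
    model="OpenGVLab/InternVL2-8B",
    trust_remote_code=True,
    limit_mm_per_prompt={"image": 2},  # allow two images per prompt
)

images = [Image.open("a.jpg"), Image.open("b.jpg")]  # hypothetical local files
prompt = "<image>\n<image>\nDescribe the differences between the two images."

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    llm.generate(
        {"prompt": prompt, "multi_modal_data": {"image": images}},
        SamplingParams(max_tokens=128),
    )

# Sort by GPU time; CPU-bound preprocessing shows up separately in the CPU columns.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))
```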

torinchen commented 4 weeks ago

test.zip - I tested the online and offline modes; the gap is significant (under LMDeploy, the gap is zero, btw). Mail: chen.xin.mail@foxmail.com

noooop commented 4 weeks ago

Can this issue be reproduced using InternVL2-8B?

https://huggingface.co/OpenGVLab/InternVL2-8B

torinchen commented 4 weeks ago

Yes, my model is just an SFT version of InternVL2-8B.

noooop commented 4 weeks ago

@torinchen @luohao123

I can't reproduce this issue.

Code: https://github.com/noooop/light-vllm/tree/main/benchmarks/InternVL2

Image preprocessing time is not included.

transformers 4.37.2 + flash_attn 2.6.3 (use_flash_attn=True)

- single-image single-round conversation: 1.37133834199999
- multi-image single-round conversation: 3.133497854799998

transformers 4.45.2 + flash_attn 2.6.3 (use_flash_attn=True)

- single-image single-round conversation: 1.4907942284
- multi-image single-round conversation: 3.1399439033000136

transformers 4.45.2 + vllm==v0.6.3.post1

- single-image single-round conversation: 1.367961298399996
- multi-image single-round conversation: 2.787156264600026

I'm not sure if it's related to the slowness of image preprocessing (#9238).
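
For reference, the HF-side timing looks roughly like this sketch (assumptions: pixel values are precomputed and dumped to a hypothetical pixel_values.pt, so image preprocessing is excluded as stated above; generation settings are illustrative):

```python
# Sketch only: time InternVL2-8B's chat() in plain transformers + flash_attn,
# with image preprocessing excluded (pixel_values.pt is a hypothetical dump).
import time
import torch
from transformers import AutoModel, AutoTokenizer

path = "OpenGVLab/InternVL2-8B"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    use_flash_attn=True,       # per the InternVL2 model card
    trust_remote_code=True,
).cuda().eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)

pixel_values = torch.load("pixel_values.pt").to(torch.bfloat16).cuda()

start = time.perf_counter()
response = model.chat(
    tokenizer,
    pixel_values,
    "<image>\nDescribe the image.",
    generation_config=dict(max_new_tokens=128, do_sample=False),
)
print(f"{time.perf_counter() - start:.3f}s", response)
```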

luohao123 commented 4 weeks ago

Even so, I think the speedup is very limited; it can be regarded as not as fast as expected.

I forget my precise numbers from before, but subjectively, without streaming, vLLM's response is not fast.

My images are not big, just normal inputs with a maximum dimension of 800.

noooop commented 4 weeks ago

Even so, I think the speedup is very limited; it can be regarded as not as fast as expected.

For a single request, flash attention is already very fast.

vLLM can only batch multiple requests to increase throughput.

I forget my precise numbers from before, but subjectively, without streaming, vLLM's response is not fast.

Do you use openai.api_server or offline inference?
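
For the online path, a minimal client sketch (assumptions: the server was started with `vllm serve OpenGVLab/InternVL2-8B --trust-remote-code --limit-mm-per-prompt image=2` on localhost:8000, and local JPEG files exist):

```python
# Sketch only: send one multi-image chat request to the OpenAI-compatible server.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def to_data_url(path: str) -> str:
    # Embed a local JPEG as a base64 data URL.
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="OpenGVLab/InternVL2-8B",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Compare these two images."},
            {"type": "image_url", "image_url": {"url": to_data_url("a.jpg")}},
            {"type": "image_url", "image_url": {"url": to_data_url("b.jpg")}},
        ],
    }],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```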

torinchen commented 4 weeks ago

@torinchen @luohao123

I can't reproduce this issue.

Code: https://github.com/noooop/light-vllm/tree/main/benchmarks/InternVL2

Image preprocessing time is not included.

transformers 4.37.2 + flash_attn 2.6.3 (use_flash_attn=True)

- single-image single-round conversation: 1.37133834199999
- multi-image single-round conversation: 3.133497854799998

transformers 4.45.2 + flash_attn 2.6.3 (use_flash_attn=True)

- single-image single-round conversation: 1.4907942284
- multi-image single-round conversation: 3.1399439033000136

transformers 4.45.2 + vllm==v0.6.3.post1

- single-image single-round conversation: 1.367961298399996
- multi-image single-round conversation: 2.787156264600026

I'm not sure if it's related to the slowness of image preprocessing (#9238).

Try online mode; I saw the speed gap there.

noooop commented 3 weeks ago

Try online mode; I saw the speed gap there.

Sorry, I'm not very familiar with the webserver part.

Many issues have mentioned that image preprocessing is slow. I think the gap is more likely caused by that.
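
To check that hypothesis, a rough sketch for timing the CPU-side transform in isolation (assumptions: a local ~800px image; this covers only the per-tile transform from the InternVL model card and omits the dynamic tiling step, which adds further work per image):

```python
# Sketch only: measure the per-image CPU preprocessing cost in isolation.
import time
from PIL import Image
import torchvision.transforms as T
from torchvision.transforms.functional import InterpolationMode

# Normalization constants as used in the InternVL model card's build_transform().
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

transform = T.Compose([
    T.Lambda(lambda img: img.convert("RGB")),
    T.Resize((448, 448), interpolation=InterpolationMode.BICUBIC),
    T.ToTensor(),
    T.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD),
])

img = Image.open("a.jpg")  # hypothetical ~800px image
start = time.perf_counter()
for _ in range(10):
    transform(img)
print("avg transform time per image:", (time.perf_counter() - start) / 10)
```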