OpenGVLab / InternVL

[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o-level performance.
https://internvl.readthedocs.io/en/latest/
MIT License

(Continuous) Batch Serving InternVL2 using lmdeploy #523

Closed zzjchen closed 2 weeks ago

zzjchen commented 3 weeks ago

I'm trying to serve InternVL2 (Llama3 76B) with lmdeploy, following the example here, on 4 A100-80G GPUs. However, I found that the server always processes one request after another. I've referred to lmdeploy's issue and adjusted --vision-max-batch-size to 8, but I still can't see the server processing requests in parallel. Is it possible to serve InternVL2-Llama3-76B with (dynamic) batching enabled? By batching, I mean the server can generate responses for multiple requests at the same time (similar to vLLM's continuous batching), as sketched below. If so, how do I do it? If not, is there some way to process batched inputs in one request, i.e., request 4 (or 8, etc.) different outputs together in a single request?
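For concreteness, this is the kind of parallel handling I mean from the client side. A minimal sketch only: the port (lmdeploy's default 23333), prompts, and worker count are placeholders, and the api_server is assumed to be running already.

```python
# Minimal sketch: fire several chat-completion requests at the lmdeploy
# OpenAI-compatible endpoint concurrently. Port, prompts, and worker count
# are placeholders; the server is assumed to be up at localhost:23333.
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:23333/v1", api_key="none")
model_name = client.models.list().data[0].id  # model registered by api_server


def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model_name,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


prompts = [f"Describe test scene {i} in one sentence." for i in range(8)]
with ThreadPoolExecutor(max_workers=8) as pool:
    # With continuous batching, these 8 requests should overlap on the server
    # instead of being answered strictly one after another.
    answers = list(pool.map(ask, prompts))
```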

G-z-w commented 2 weeks ago

You can use batch inference, or you can deploy an OpenAI-compatible API server and then send requests to it.
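A minimal sketch of the offline batch-inference route with lmdeploy's VLM pipeline; the image URLs are placeholders and tp=4 is an assumption matching the 4-GPU setup described above.

```python
# Minimal sketch of offline batch inference with lmdeploy's pipeline API.
# Image URLs are placeholders; tp=4 assumes the 4 x A100-80G setup above.
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

pipe = pipeline(
    "OpenGVLab/InternVL2-Llama3-76B",
    backend_config=TurbomindEngineConfig(tp=4),
)

image_urls = [
    "https://example.com/img0.jpg",  # placeholder
    "https://example.com/img1.jpg",  # placeholder
]
# Passing a list of (prompt, image) pairs runs the whole batch in one call.
prompts = [("Describe this image.", load_image(url)) for url in image_urls]
responses = pipe(prompts)
for r in responses:
    print(r.text)
```

For the online route, `lmdeploy serve api_server OpenGVLab/InternVL2-Llama3-76B --tp 4` exposes an OpenAI-compatible endpoint that a concurrent client like the sketch in the question can target.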

zzjchen commented 2 weeks ago

Thanks, I've now understood the whole process through lmdeploy's issue.