I'm trying to serve InternVL2 (Llama3 76B) with lmdeploy, following the example here, on 4 A100-80G GPUs. However, I found that the server always processes requests one after another.
I've followed LMDeploy's issue and adjusted --vision-max-batch-size to 8, but I still don't see the server processing requests in parallel.
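For reference, my launch command looks roughly like this (the model path is the HF repo name; other options are left at their defaults):

```shell
lmdeploy serve api_server OpenGVLab/InternVL2-Llama3-76B \
    --tp 4 \
    --vision-max-batch-size 8
```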
Is it possible to serve InternVL2-Llama3-76B with (dynamic) batching enabled? By batching, I mean the server can generate responses for multiple requests at the same time (probably similar to vLLM's continuous batching). If so, how do I enable it?
If that's not possible, is there some way to process batched inputs in one request, i.e. requesting 4 (or 8, etc.) different outputs together in a single request?
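This is a minimal sketch of how I'm checking for parallelism: fire several identical requests at the OpenAI-compatible endpoint concurrently and compare the total wall time against a single request. The URL/port and prompt are placeholders for my setup:

```python
# Sketch of a concurrency test against the lmdeploy api_server.
# If requests are batched, total wall time should be close to the
# latency of one request rather than N times it.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:23333/v1/chat/completions"  # assumed default port
PAYLOAD = {
    "model": "OpenGVLab/InternVL2-Llama3-76B",
    "messages": [{"role": "user", "content": "Describe this image."}],
    "max_tokens": 128,
}

def one_request(_):
    # Send a single chat completion request and return its latency.
    t0 = time.time()
    r = requests.post(URL, json=PAYLOAD, timeout=300)
    r.raise_for_status()
    return time.time() - t0

start = time.time()
with ThreadPoolExecutor(max_workers=8) as pool:
    latencies = list(pool.map(one_request, range(8)))

print("per-request latencies:", [f"{t:.1f}s" for t in latencies])
print(f"total wall time: {time.time() - start:.1f}s")
```

With the current setup, the total wall time scales roughly linearly with the number of requests, which is why I believe they are being handled sequentially.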