neuralmagic / guidellm

Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs
Apache License 2.0
159 stars · 11 forks

[Rate Type] Concurrencies #47

Open philschmid opened 2 months ago

philschmid commented 2 months ago

Hello,

I am trying to integrate guidellm into a benchmark suite where we run different load tests based on user concurrencies. We define user concurrency as "users" that send requests one after another: send request -> wait for response -> send next request.

I first assumed that's what "constant" and "rate" do, but far more requests are sent, since they are sent per second. Is there a way to customize the "user concurrency"? I assume that a concurrency of 1 == the synchronous type, but it would be great if I could do something like:

```shell
guidellm --target "http://localhost:8080/v1" --model "meta-llama/Meta-Llama-3.1-8B-Instruct" --data-type emulated --data "prompt_tokens=550,generated_tokens=250" --max-seconds 60 --rate-type concurrent --rate 1 --rate 2 --rate 10 --rate 50 --output-path r.json
```
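The closed-loop pattern described above (a fixed pool of "users", each sending a new request only after its previous one completes) can be sketched with plain asyncio. This is an illustration of the requested behavior, not guidellm's implementation; `fake_request` is a hypothetical stand-in for a real HTTP call to the server.

```python
import asyncio
import random
import time

async def fake_request():
    # Hypothetical stand-in for an LLM inference call; in a real
    # benchmark this would be an HTTP request to the --target server.
    await asyncio.sleep(random.uniform(0.01, 0.05))

async def user(results, deadline):
    # One closed-loop "user": send -> wait for response -> send next.
    while time.monotonic() < deadline:
        start = time.monotonic()
        await fake_request()
        results.append(time.monotonic() - start)

async def run_benchmark(concurrency, duration_s):
    # Keep exactly `concurrency` requests in flight for `duration_s` seconds.
    results = []
    deadline = time.monotonic() + duration_s
    await asyncio.gather(*(user(results, deadline) for _ in range(concurrency)))
    return results

if __name__ == "__main__":
    for users in (1, 2, 4):
        latencies = asyncio.run(run_benchmark(users, 0.5))
        print(f"{users} users: {len(latencies)} requests, "
              f"avg latency {sum(latencies) / len(latencies):.3f}s")
```

Because each user waits for its response before sending again, the offered load self-regulates with server latency, unlike a fixed requests-per-second schedule.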
markurtz commented 1 month ago

Hey @philschmid, I understand what you mean about this request. You'd specifically like to keep a fixed number of concurrent requests over the life of the benchmark, where as soon as one finishes it immediately starts a new one, is that correct? You can't easily achieve that currently through the constant or poisson rate types, since those are set as the number of requests per second, which means you'd have to adjust them until you hit the desired average number of concurrent users, right?
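The tuning described above has a rough closed form: by Little's Law, average concurrency ≈ arrival rate (req/s) × average latency (s). A minimal sketch of that back-of-the-envelope conversion (the helper name is my own, not part of guidellm):

```python
def rate_for_target_concurrency(target_users, avg_latency_s):
    # Little's Law: avg concurrency = arrival rate * avg latency,
    # so the --rate value approximating N concurrent users is N / latency.
    return target_users / avg_latency_s

# e.g. to keep ~10 requests in flight when a request takes ~2 s on average:
print(rate_for_target_concurrency(10, 2.0))  # -> 5.0 requests per second
```

Note this only matches on average: an open-loop rate keeps sending at the same pace even when the server slows down, so instantaneous concurrency can drift well above the target under load.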

philschmid commented 1 month ago

Hey,

Yes. I am looking for a way to benchmark the load under e.g. 1, 2, 4, 8, 16, 32, 64, 128 concurrent users (send request -> wait for response -> send again).

But looking into more benchmarks and dashboards, people seem to be switching to QPS (which the rate type should cover), so I'm not sure how important this is.