philschmid opened 2 months ago
Hey @philschmid, I understand what you mean about this request. You'd specifically like to be able to keep a fixed number of concurrent requests over the life of the benchmark, where as soon as one request finishes it immediately starts a new one, is that correct? You can't easily achieve that today with the constant or Poisson rate types, since those are set as a number of requests per second, and you'd have to keep adjusting the rate until you hit the average number of concurrent users you want, right?
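For reference, the rough conversion I'm describing between a rate-based run and an average concurrency is just Little's law (a back-of-the-envelope sketch; the numbers below are made up, not measured):

```python
# Little's law: average concurrency ≈ request rate (req/s) × mean request latency (s).
# To approximate a target number of concurrent users with a constant-rate run:
target_concurrency = 32        # hypothetical target of 32 concurrent users
mean_latency_s = 2.5           # hypothetical mean end-to-end request latency in seconds
rate = target_concurrency / mean_latency_s  # ≈ 12.8 requests per second
print(rate)
```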
Hey,
Yes. I am looking for a way to benchmark the load under e.g. 1, 2, 4, 8, 16, 32, 64, 128 concurrent users (send request -> wait for response, send again).
But looking at more benchmarks and dashboards, people seem to be switching to QPS (which the rate types should cover), so I'm not sure how important this is.
Hello,
I am trying to integrate guidellm into a benchmark suite. There we run different load tests based on user concurrencies. We define user concurrency as "users" that send requests one after another, meaning send request -> wait for response -> send next request. I first assumed that's what "constant" and "rate" do, but those send far more requests, since they are issued per second. Is there a way to customize the "user concurrency"? I assume that concurrency == the synchronous type, but it would be great if I could run something like multiple of those synchronous "users" in parallel.