Closed GlenTiki closed 8 years ago
I disagree with this feature. The reason wrk is so good is that it drives load to the maximum of what the target can handle (and the given host can produce). Specifying a number of requests per second means second-guessing the maximum load.
ab does that. I typically use ab to crash systems, because it will throw that number of requests per second no matter what.
I think the actual answer to this issue is to document why specifying the number of requests is useless.
I was basing the rate feature off of wrk2 and vegeta. If you don't want to support this though, I'll document it as a design choice. :)
It is a design choice. I'm fine with a PR implemented as "at most the given throughput", rather than "constant throughput". This is how wrk2 is implemented.
The usual task is "how much load can this take on X number of connections?". Specifying a max rate as well is useful in some conditions (e.g. what is the latency at 2000 req/s?), but it is less frequent.
So, let's document the current behavior, and keep this open for a low priority feature.
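To make the "at most the given throughput" idea concrete, here is a minimal sketch (my own illustration, not autocannon's actual implementation; the `TokenBucket` name and API are hypothetical): a token bucket refills at the target rate, and a sender takes a token before each request. If no token is available it simply skips or waits, so the rate is never exceeded, but a slow target is never pushed beyond what it can handle, unlike ab's constant-throughput behavior.

```javascript
// Hypothetical token bucket for "at most the given throughput" pacing.
// Not autocannon's real API; a sketch of the design being discussed.
class TokenBucket {
  constructor (ratePerSecond, capacity = ratePerSecond) {
    this.rate = ratePerSecond     // tokens added per second
    this.capacity = capacity      // max burst size
    this.tokens = capacity
    this.last = Date.now()
  }

  // Refill based on elapsed time, then try to take one token.
  // Returns true if a request may be sent now, false otherwise.
  tryTake () {
    const now = Date.now()
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.rate
    )
    this.last = now
    if (this.tokens >= 1) {
      this.tokens -= 1
      return true
    }
    return false
  }
}
```

The key property: when `tryTake()` returns false, the caller just doesn't send, so the effective rate degrades gracefully to whatever the loop can sustain instead of queueing up a backlog.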
The docs say "greatly inspired by wrk and wrk2", but if the only difference between wrk and wrk2 is rate limiting, then it's not really inspired by wrk2 at all. :)
PS. Rate limiting seems more useful to me than the number of connections, although I'm probably missing something from the big picture. I'd appreciate a more detailed explanation. Thank you.
> The docs say "greatly inspired by wrk and wrk2", but if the only difference between wrk and wrk2 is rate limiting, then it's not really inspired by wrk2 at all. :)
IMHO the main difference between wrk and wrk2 is the use of HdrHistogram, which this library uses as well.
Rate limiting would be a nice feature to add; we just don't need it right now. Anyway, both you and @thekemkid are right, we need to get this going.
Many benchmarking tools allow users to specify the rate of requests per second. It would be nice to support this. If the rate cannot be met, we should just do as many requests as we can. If no rate is specified, we should do as many requests as possible.
@mcollina We'll need to chat about this, as I'm not sure how the code for this would look or work. Rate limiting at a huge rate that cannot be met would add unnecessary overhead.
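One way to sidestep the overhead concern (a sketch under my own assumptions, not a proposal for autocannon's actual internals; `makeSchedule` is a hypothetical name): pick the pacing strategy once up front, so the unlimited path does no bookkeeping at all, and the paced path drops accumulated lag instead of queueing a backlog, which gives the "do as many requests as we can" behavior when the rate cannot be met.

```javascript
// Hypothetical pacing-strategy factory. Returns a function that, given the
// current time in ms, says how long to wait before firing the next request.
function makeSchedule (ratePerSecond) {
  if (!ratePerSecond) {
    // No rate specified: zero bookkeeping, always fire immediately.
    return () => 0
  }
  const interval = 1000 / ratePerSecond
  let next = null
  return (now) => {
    if (next === null) next = now
    next += interval
    const delay = next - now
    if (delay <= 0) {
      // Behind schedule: the target rate can't be met. Reset the schedule
      // and fire now, rather than building an ever-growing backlog.
      next = now
      return 0
    }
    return delay
  }
}
```

A caller would pass the returned delay to `setTimeout` (or fire synchronously on 0); the unlimited case never touches the clock, so specifying no rate costs nothing.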