waghanza closed this issue 5 years ago
Observing latency at different request rates (throughput), as opposed to benchmarking the maximum capacity of a system, is what Vegeta was built for. Take it for a spin!
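For illustration, a minimal constant-rate vegeta run might look like this (a sketch assuming vegeta is installed and a server is listening on localhost:8080; the URL, rate, and duration are placeholder values):

```shell
# Fire a constant 100 req/s for 30 s at the target, recording every result,
# then summarize the observed latencies (min/mean/percentiles/max).
echo "GET http://localhost:8080/" | \
  vegeta attack -rate=100 -duration=30s > results.bin
vegeta report results.bin
```

The point is that the rate is held constant no matter how fast the server answers, so the report shows "latency at 100 req/s" rather than "latency at whatever rate the server happens to sustain".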
@OvermindDL1 any suggestions?
Time to benchmark the benchmarkers? ^.^;
Yeah, and it seems to clarify the results :stuck_out_tongue:
This link seems to be interesting => https://github.com/denji/awesome-http-benchmark
I'm thinking about using https://k6.io
@OvermindDL1 any advice?
I tested k6 in #802 and, in short, it's not good. ^.^; Across the variety of tools tried in #802, wrk was still the best overall, with apachebench close behind at low thread counts (though on many-core CPUs apachebench started doing pretty badly, since it's single-threaded).
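For reference, the two tools being compared are typically invoked roughly like this (a sketch assuming a server on localhost:8080; the thread, connection, and request counts are placeholders, not the numbers used in #802):

```shell
# wrk: 4 threads, 100 open connections, 30 s, print latency percentiles.
# This drives the server as hard as the connections allow (max throughput).
wrk -t4 -c100 -d30s --latency http://localhost:8080/

# apachebench: 100000 total requests, 100 concurrent. It runs on a single
# thread, which is why it falls behind wrk on many-core machines.
ab -n 100000 -c 100 http://localhost:8080/
```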
For note:
You said to me (on gitter) that wrk computes latency at max throughput, and that it could be more realistic to compute it at a constant throughput. Given this information, it could be useful to change (or not) the sieger.
Testing latency at a couple of constant throughputs is useful, but testing the maximum throughput of a server also matters significantly: it shows how well the server handles the primary failure condition and how fast it can actually pump through requests. A ruby server may be able to answer a request in 20 milliseconds if it's only getting one at a time, for example, but if that climbs to 20 seconds under even a light load, that is not a great indicator of its reliability...
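The 20 ms to 20 s blow-up isn't hypothetical; basic queueing theory predicts it. A tiny sketch using the M/M/1 mean-latency formula W = 1/(mu - lambda), where mu is the service rate and lambda the arrival rate (the 50 req/s server and the arrival rates below are made-up numbers for illustration, not measurements from this thread):

```python
def mean_latency(mu: float, lam: float) -> float:
    """Mean time a request spends in an M/M/1 system, in seconds.

    mu:  service rate (requests the server can complete per second)
    lam: arrival rate (requests offered per second); must be < mu
    """
    if lam >= mu:
        raise ValueError("unstable system: arrival rate >= service rate")
    return 1.0 / (mu - lam)

# A server that answers one request in 20 ms unloaded has mu = 50 req/s.
mu = 50.0
for lam in (1.0, 25.0, 45.0, 49.0):
    print(f"{lam:5.1f} req/s -> {mean_latency(mu, lam) * 1000:8.1f} ms")
```

At 1 req/s the mean latency is close to the bare 20 ms service time, but it doubles at half capacity and reaches a full second at 98% load, which is exactly why latency measured only at max throughput paints a different picture than latency at a fixed, moderate rate.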
Thanks for all the comments here, we will use wrk then.
Hi @fafhrd91,

You said to me (on gitter) that wrk computes latency at max throughput, and that it could be more realistic to compute it at a constant throughput. Given this information, it could be useful to change (or not) the sieger. There are some:

The question here is which of these tools (or another one I don't know of) gives the most realistic results for this use case?

Regards,