Closed XNinety9 closed 1 year ago
Hi, sorry for the delay! The latencies are computed from the errored requests; I'm guessing it took some time for those errors to come back, so the latencies are not reliable here: they are just a function of how long each request takes to fail. Also, the achieved rps can be lower than the rate sent: if we send 100 requests in 1 second but it takes 2 seconds to receive all the responses, the input load is 100 rps while the achieved rps is 50. Hope it's clear!
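The arithmetic above can be sketched in a few lines. This is only an illustration of the explanation, not loadtest's actual internals; the function name and parameters are made up for the example:

```javascript
// Why achieved rps can be below the send rate: loadtest divides completed
// requests by the TOTAL elapsed time (send window + time waiting for the
// last responses), not by the send window alone.
// achievedRps and its parameters are hypothetical names for illustration.
function achievedRps(requestsSent, sendWindowSeconds, trailingResponseSeconds) {
  const totalElapsedSeconds = sendWindowSeconds + trailingResponseSeconds;
  return requestsSent / totalElapsedSeconds;
}

// 100 requests sent in 1 s, responses keep arriving for 1 more second:
// input load is 100 rps, but the achieved rate is 100 / 2 = 50 rps.
console.log(achievedRps(100, 1, 1)); // 50
```

So the gap between input load and achieved rps grows with how long responses (or errors) take to arrive, which matches the numbers reported below.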
Hi. I've been using your library for a while now, and so far so good. Lately I've been working on a load test where I stress the target heavily in order to provoke errors.
The target can take up to N requests per second, and I load it with a progressive workload that climbs up to N+20% rq/s. That progressive workload is split into several segments, each lasting 20 seconds and carrying a specific load. The 20 s limit is enforced by the `maxSeconds` parameter. At the beginning, everything's fine, the logs are correct:
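For reference, the stepped workload described above could be generated like this. This is a hedged sketch of my own setup, not loadtest API; `maxRps` (N) and `steps` are illustrative names:

```javascript
// Build a progressive workload: climb from a fraction of the target up to
// N + 20% in equal steps, each segment capped at 20 s via maxSeconds.
// workloadSegments, maxRps and steps are hypothetical names for this sketch.
function workloadSegments(maxRps /* N, the target's capacity */, steps) {
  const peakRps = Math.round(maxRps * 1.2); // N + 20%
  const segments = [];
  for (let i = 1; i <= steps; i++) {
    segments.push({
      rps: Math.round((peakRps * i) / steps), // load for this segment
      maxSeconds: 20,                         // each segment lasts 20 s
    });
  }
  return segments;
}

// With N = 200 and 4 steps, the rps values are 60, 120, 180, 240.
console.log(workloadSegments(200, 4));
```

Each segment's options would then be passed to a separate loadtest run in sequence.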
In this segment, the input load was 163 rq/s and the log says it went up to 159. Hm ok, why not.
But when the load increases and errors stack up, the logs turn a bit weird:
Here, the input load was 244 rq/s, but loadtest reports it achieved only 175. What really bugs me is that it also says that none of the requests sent ever returned: all were counted as errors. Shouldn't the rps equal the input load, whatever happens?
Another thing: if all requests returned as errors, where do the latency values come from?