We currently display latency and concurrency. We should also calculate and display requests per second.
From our own load testing documentation elsewhere:
Requests Per Second
We arrive at the requests per second value using this formula:
( 1,000 / Latency ) * Concurrency
In plain English: one second divided by the latency in milliseconds, multiplied by the level of concurrency, equals requests per second.
Example
If we have a 200 ms response time, each concurrent connection can fit 5 requests into 1 second. With a concurrency of 1,000, that works out to 5,000 requests per second.
( 1,000 ms per second / 200 ms average latency ) * 1,000 concurrent requests = 5,000 requests per second
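The formula above could be sketched as a small helper like this (a hypothetical sketch; the function name and signature are assumptions, not part of the existing display code):

```python
def requests_per_second(latency_ms: float, concurrency: int) -> float:
    """Estimate requests per second as (1,000 / latency in ms) * concurrency."""
    return (1000 / latency_ms) * concurrency

# Worked example from above: 200 ms average latency, 1,000 concurrent requests
print(requests_per_second(200, 1000))  # 5000.0
```

The same calculation would apply wherever we already compute latency and concurrency for display.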