sankalp-khare opened this issue 3 months ago (status: Open)
Hi @sankalp-khare
Indeed, this kind of information could be useful. We need to find the right balance between too little and too much information.
By "latency" you mean "request duration" (time between the start of the response and the end of the response transfer) right?
We can bikeshed some outputs (taking k6 as an inspiration):

```
Executed files:    50
Executed requests: 50 (5.8/s)
Request durations: 976 ms (average) 701 ms (median) 1400 ms (p95)
Succeeded files:   50 (100.0%)
Failed files:      0 (0.0%)
Total duration:    8559 ms
```
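For illustration, here is a minimal Python sketch of how such a summary could be computed from a list of per-request durations. The `durations_ms` input and the nearest-rank percentile method are assumptions for the example, not hurl's actual implementation:

```python
import statistics

def percentile(sorted_values, p):
    # Nearest-rank percentile; hurl's real method could differ.
    index = max(0, round(p / 100 * len(sorted_values)) - 1)
    return sorted_values[index]

def print_summary(durations_ms, total_duration_ms):
    ordered = sorted(durations_ms)
    count = len(ordered)
    rate = count / (total_duration_ms / 1000)  # requests per second
    print(f"Executed requests: {count} ({rate:.1f}/s)")
    print(f"Request durations: {statistics.mean(ordered):.0f} ms (average) "
          f"{statistics.median(ordered):.0f} ms (median) "
          f"{percentile(ordered, 95)} ms (p95)")

# Mock data echoing the numbers above; real durations would come from hurl.
print_summary([701, 976, 1400] * 17, 8559)
```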
Yes, I mean request duration. Putting all the aggregate stats on a single line looks good to me!
It would also be useful to print a distribution of the received response codes: for each code seen in the responses, show how many requests received it (count) and what percentage of all responses it represents.
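A minimal sketch of that distribution, assuming the status codes have been collected into a list (mock data below):

```python
from collections import Counter

def print_status_distribution(status_codes):
    counts = Counter(status_codes)
    total = len(status_codes)
    for code, count in sorted(counts.items()):
        print(f"{code}: {count} ({count / total * 100:.1f}%)")

# Mock responses: 47 OK, 1 rate-limited, 2 server errors.
print_status_distribution([200] * 47 + [429] + [500] * 2)
```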
We need to find a balance for the test summary between too much information and too little. To compute solid stats on tests, you can use the --json option. This structured view of a test should be sufficient to export indicators.
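As a rough illustration of such post-processing, here is a hedged Python sketch. The `entries` / `time` field names and the one-report-per-line layout are assumptions made for the example; the actual schema of hurl's --json output is documented by hurl and may differ:

```python
import json
import statistics
import sys

# Hypothetical post-processing of hurl --json output, e.g.:
#   hurl --json --repeat 10000 test.hurl | python3 stats.py
# ASSUMPTION: one JSON report per line, with a per-entry "time" in ms;
# check hurl's documentation for the real field names.
durations_ms = []
for line in sys.stdin:
    if not line.strip():
        continue
    report = json.loads(line)
    for entry in report.get("entries", []):
        durations_ms.append(entry["time"])

if durations_ms:
    ordered = sorted(durations_ms)
    print(f"average: {statistics.mean(ordered):.0f} ms")
    print(f"median:  {statistics.median(ordered):.0f} ms")
    print(f"p95:     {ordered[int(0.95 * len(ordered))]} ms")
```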
```
Executed files:    50
Executed requests: 50 (5.8/s)
Request duration:  4000 ms
  average: 976 ms
  median:  701 ms
  min:     701 ms
  max:     976 ms
  p90:     100 ms
  p95:     100 ms
Succeeded files:   50 (100.0%)
Failed files:      0 (0.0%)
Total duration:    8559 ms
```
Problem to solve
If I use something like `--repeat 10000`, I get the response time of each request on its own line, but what would be more useful to me as a user is aggregate information like the mean, median, p95, etc. of the response times.

Proposal
In addition to "Duration:", which prints the total runtime of the hurl session, add some more details to the summary that help the user understand the request latency trend observed during their session. I imagine something like:
Additional context and resources
This felt like something I would want when I first used hurl today to test an endpoint with the `--repeat` option.

Tasks to complete