Netperf is a benchmark that can be used to measure the performance of many different types of networking. It provides tests for both unidirectional throughput and end-to-end latency.
Determine output format of results: ability to pass --output-format flag #66
Description
Netperf prints raw-format results directly to stdout, such as the following:
"MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 20001
AF_INET to 104.154.50.86 () port 20001 AF_INET : +/-2.500% @ 99% conf.
: first burst 0",\n
Throughput,Throughput Units,Throughput Confidence Width (%),
Confidence Iterations Run,Stddev Latency Microseconds,
50th Percentile Latency Microseconds,90th Percentile Latency Microseconds,
99th Percentile Latency Microseconds,Minimum Latency Microseconds,
Maximum Latency Microseconds\n
1405.50,Trans/s,2.522,4,783.80,683,735,841,600,900\n
We can use output selectors to enrich the results, as you already know.
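For reference, output like the example above comes from the omni output selectors. A sketch of such an invocation might look like the following (the host is taken from the example output, but the selector list and flags are assumptions on my side; please check them against the manual for your installed version):

netperf -H 104.154.50.86 -t TCP_RR -I 99,5 -- \
    -o THROUGHPUT,THROUGHPUT_UNITS,THROUGHPUT_CONFID,CONFIDENCE_ITERATION,STDDEV_LATENCY,P50_LATENCY,P90_LATENCY,P99_LATENCY,MIN_LATENCY,MAX_LATENCY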
Motivation
Currently, we have to write a wrapper application that reads and scrapes netperf's raw output and converts it to a target format, so we can do further analysis against a predefined spec (a sketch of such a wrapper is below).
By defining a spec, other apps or wrapping tools that depend on this project could no longer create custom output specs that are inconsistent with each other.
Are there any existing output spec definitions or organizations for performance benchmarking?
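To illustrate the kind of wrapper this currently requires, here is a minimal sketch (hypothetical, not part of netperf or any existing tool; the selector list, test type, and host are just examples) that runs netperf with CSV output selectors and converts the header/data line pair into JSON:

#!/usr/bin/env python3
# Hypothetical wrapper sketch: run netperf with CSV output selectors and
# convert the header/data line pair into JSON. The selector list, test type,
# and host below are examples, not anything defined by this project.
import csv
import json
import subprocess

SELECTORS = ("THROUGHPUT,THROUGHPUT_UNITS,P50_LATENCY,P90_LATENCY,"
             "P99_LATENCY,MIN_LATENCY,MAX_LATENCY")

def run_netperf(host):
    raw = subprocess.run(
        ["netperf", "-H", host, "-t", "TCP_RR", "--", "-o", SELECTORS],
        check=True, capture_output=True, text=True,
    ).stdout
    # The last two non-empty lines are the CSV header and the data row;
    # everything before them is the human-oriented banner.
    lines = [line for line in raw.splitlines() if line.strip()]
    header = next(csv.reader([lines[-2]]))
    values = next(csv.reader([lines[-1]]))
    return dict(zip(header, values))

if __name__ == "__main__":
    print(json.dumps(run_netperf("104.154.50.86"), indent=2))

With a built-in --output-format flag, this whole scraping step would go away and every consumer would get the same structured output.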
Related Works
Since we already have some related tools that cover this requirement, I'm not so sure whether we should build in support for this; it may be an out-of-scope feature for this project. From the Unix philosophy perspective: do one thing and do it well, right?
I couldn't find any related issue, so I'm raising it here for further discussion.
Waiting for your thoughts!