Closed venkataanil closed 9 months ago
Hello @venkataanil, can we also have the driver details in the RR output?
done, thanks
"netserver && iperf3 -s -p 22865 && uperf -s -v -P 30000 && sleep 10000000" So we are creating a separate container in server pod for uperf server.
Why? Because iperf and uperf in server mode run in the foreground?
You can always use & to run them in the background with something like:
$ (netserver & iperf3 -s -p 22865 & uperf -s -v -P 30000) && sleep inf
$ ps -ef | egrep "uperf|netserver|iperf"
egrep: warning: egrep is obsolescent; using grep -E
rsevilla 839931 839929 0 22:57 pts/3 00:00:00 iperf3 -s -p 22865
rsevilla 839932 839929 0 22:57 pts/3 00:00:00 uperf -s -v -P 30000
rsevilla 839933 3241 0 22:57 ? 00:00:00 netserver
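The distinction is that `&&` waits for the previous command to exit before starting the next, so a foreground server blocks everything after it, while `&` backgrounds the command immediately. A minimal sketch of that behavior, with `sleep` standing in for a long-running server process:

```shell
# '&' backgrounds each command right away; with '&&' the second sleep
# would only start after the first had exited.
sleep 1 &
sleep 1 &
wait   # both finish after ~1s total, not ~2s
echo "both background jobs finished"
```

This is why the subshell form above, `(netserver & iperf3 ... & uperf ...)`, lets all three servers run concurrently in one container.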
Thanks, I only tried running them in the foreground. Moving them to separate containers is a good option. Let's continue with it.
I'm not sure what changes you've added in the last commit, but I still see multiple containers for the servers.
@rsevilla87 I would like to get 1 more +1 -- let me know if you feel there is anything else for @venkataanil to address.
Does uperf not provide retransmission info?
+---------------------+---------+------------+-------------+--------------+---------+--------------+-----------+----------+---------+------------+
| TYPE | DRIVER | SCENARIO | PARALLELISM | HOST NETWORK | SERVICE | MESSAGE SIZE | SAME NODE | DURATION | SAMPLES | AVG VALUE |
+---------------------+---------+------------+-------------+--------------+---------+--------------+-----------+----------+---------+------------+
| TCP Retransmissions | netperf | TCP_STREAM | 1 | false | false | 1024 | false | 30 | 3 | 38.333333 |
| TCP Retransmissions | uperf | TCP_STREAM | 1 | false | false | 1024 | false | 30 | 3 | 0.000000 |
| TCP Retransmissions | netperf | TCP_STREAM | 2 | false | false | 1024 | false | 30 | 3 | 47.000000 |
| TCP Retransmissions | uperf | TCP_STREAM | 2 | false | false | 1024 | false | 30 | 3 | 0.000000 |
| TCP Retransmissions | netperf | TCP_STREAM | 1 | false | false | 8192 | false | 30 | 3 | 40.333333 |
| TCP Retransmissions | uperf | TCP_STREAM | 1 | false | false | 8192 | false | 30 | 3 | 0.000000 |
| TCP Retransmissions | netperf | TCP_STREAM | 2 | false | false | 8192 | false | 30 | 3 | 170.000000 |
| TCP Retransmissions | uperf | TCP_STREAM | 2 | false | false | 8192 | false | 30 | 3 | 0.000000 |
| UDP Loss Percent | netperf | UDP_STREAM | 1 | false | false | 1024 | false | 30 | 3 | 2.435977 |
| UDP Loss Percent | uperf | UDP_STREAM | 1 | false | false | 1024 | false | 30 | 3 | 0.000000 |
+---------------------+---------+------------+-------------+--------------+---------+--------------+-----------+----------+---------+------------+
Let's go with the multiple-containers approach, so I retained the multiple-containers code in this patch.
I used benchmark-wrapper and benchmark-operator as the base for running uperf and parsing results. The uperf wrapper https://github.com/cloud-bulldozer/benchmark-wrapper/blob/master/snafu/benchmarks/uperf/uperf.py did not have retransmission info, so I couldn't add it. I will push another PR for retransmission and additional changes.
Similar to iperf, users can run uperf along with netperf using the "--uperf" option.
benchmark-wrapper is used as a reference for parsing 1) user options, to create the uperf config file (the input to the uperf client command), and 2) uperf output.
The uperf driver supports only TCP_STREAM, UDP_STREAM, TCP_RR, and UDP_RR tests. For each test in full-run.yaml, the uperf driver creates a uperf profile file inside the client pod and uses it to run the test. Parallelism is implemented using uperf's nprocs option.
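For illustration, a generated uperf profile for a TCP stream test could look roughly like the sketch below (a minimal example following uperf's documented XML profile format; the group/transaction values and the `$h` remote-host variable are placeholders, not the driver's actual output):

```xml
<?xml version="1.0"?>
<profile name="tcp-stream">
  <!-- nprocs controls parallelism: uperf forks this many client processes -->
  <group nprocs="2">
    <transaction iterations="1">
      <flowop type="connect" options="remotehost=$h protocol=tcp"/>
    </transaction>
    <transaction duration="30s">
      <flowop type="write" options="size=1024"/>
    </transaction>
    <transaction iterations="1">
      <flowop type="disconnect"/>
    </transaction>
  </group>
</profile>
```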
The uperf server can't be chained with "&&" in the same container inside the server pod, i.e. "netserver && iperf3 -s -p 22865 && uperf -s -v -P 30000 && sleep 10000000", because the servers block in the foreground. So we are creating a separate container in the server pod for the uperf server.
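A rough sketch of the resulting server pod layout with a dedicated uperf container (the pod/container names, image, and command strings here are illustrative assumptions, not the PR's actual manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netperf-server   # placeholder name
spec:
  containers:
    - name: server
      image: quay.io/example/netperf-image   # placeholder image
      command: ["/bin/sh", "-c", "netserver && iperf3 -s -p 22865 && sleep inf"]
    - name: uperf-server
      image: quay.io/example/netperf-image   # placeholder image
      # uperf in server mode blocks in the foreground, which also keeps
      # this container alive without a trailing sleep
      command: ["uperf", "-s", "-v", "-P", "30000"]
```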