I've seen this consistently on AWS EC2 instances, e.g. running Amazon Linux 2 on a c5.xlarge, which is documented to support 5 Gbps for a single flow and up to 10 Gbps across multiple flows. I'm testing a single flow as it's the easiest way to reproduce the issue.
Client side is using ethr built from source on March 25th:

[ssm-user@ip-172-31-10-116 ethr]$ uname -a
Linux ip-172-31-10-116.eu-west-3.compute.internal 5.4.95-42.163.amzn2.x86_64 #1 SMP Thu Feb 4 12:50:05 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
[ssm-user@ip-172-31-10-116 ethr]$ ./ethr -c 172.31.15.12
Ethr: Comprehensive Network Performance Measurement Tool (Version: UNKNOWN)
Maintainer: Pankaj Garg (ipankajg @ LinkedIn | GitHub | Gmail | Twitter)

Using destination: 172.31.15.12, ip: 172.31.15.12, port: 8888
[ 16] local 172.31.10.116 port 32967 connected to 172.31.15.12 port 8888
[ ID ] Protocol  Interval      Bits/s
[ 16]  TCP       000-001 sec   4.99G
[ 16]  TCP       001-002 sec   4.97G
[ 16]  TCP       002-003 sec   4.97G
[ 16]  TCP       003-004 sec   4.97G
[ 16]  TCP       004-005 sec   4.97G
[ 16]  TCP       005-006 sec   4.98G
[ 16]  TCP       006-007 sec   4.97G
[ 16]  TCP       007-008 sec   4.98G
[ 16]  TCP       008-009 sec   4.97G
[ 16]  TCP       009-010 sec   4.97G
Ethr done, duration: 10s.
Hint: Use -d parameter to change duration of the test.

[ssm-user@ip-172-31-10-116 ethr]$ ethtool -S eth0 | grep exceeded
     bw_in_allowance_exceeded: 0
     bw_out_allowance_exceeded: 0
     pps_allowance_exceeded: 0
     conntrack_allowance_exceeded: 0
     linklocal_allowance_exceeded: 0
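As a sanity check (my own back-of-the-envelope arithmetic, not tool output): summing the ten client-side interval rates gives a total transfer that matches iperf's reported 5.79 GBytes further below, so the client-side numbers look internally consistent:

```go
package main

import "fmt"

func main() {
	// Per-second client-side rates reported by ethr above, in Gbit/s.
	rates := []float64{4.99, 4.97, 4.97, 4.97, 4.97, 4.98, 4.97, 4.98, 4.97, 4.97}

	totalGbit := 0.0
	for _, r := range rates {
		totalGbit += r // each interval is exactly 1 second
	}
	totalBytes := totalGbit * 1e9 / 8 // decimal Gbit -> bytes
	gib := totalBytes / (1 << 30)     // bytes -> GiB (what iperf prints as "GBytes")

	fmt.Printf("total: %.2f Gbit = %.2f GBytes\n", totalGbit, gib)
	// ~49.74 Gbit = ~5.79 GBytes, matching iperf's reported transfer.
}
```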
Server side:

[ssm-user@ip-172-31-15-12 ethr]$ ./ethr -s
Ethr: Comprehensive Network Performance Measurement Tool (Version: UNKNOWN)
Maintainer: Pankaj Garg (ipankajg @ LinkedIn | GitHub | Gmail | Twitter)

Accepting IP version: ipv4, ipv6
Listening on port 8888 for TCP & UDP
[RemoteAddress] Proto  Bits/s  Conn/s  Pkt/s  Latency
[172.31.10.116] TCP    5.59G   1       --     --
[172.31.10.116] TCP    5.71G   0       --     --
[172.31.10.116] TCP    5.57G   0       --     --
[172.31.10.116] TCP    5.59G   0       --     --
[172.31.10.116] TCP    5.66G   0       --     --
[172.31.10.116] TCP    5.76G   0       --     --
[172.31.10.116] TCP    5.80G   0       --     --
[172.31.10.116] TCP    5.55G   0       --     --
[172.31.10.116] TCP    5.52G   0       --     --
[172.31.10.116] TCP    5.69G   0       --     --
Notice the server side showing 5.69G Bits/s (and similarly inflated values in every interval) when I'd expect 4.97G, matching the client. Is there a specific encapsulation being counted, perhaps?
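To quantify the gap (again my own arithmetic, under the assumption that any encapsulation overhead would at most add Ethernet/IP/TCP framing on top of the payload): the server-side average is about 13-14% above the client rate, while worst-case framing overhead at a 1500-byte MTU is only about 6% (and about 1% with EC2 jumbo frames), so encapsulation alone doesn't seem able to explain it:

```go
package main

import "fmt"

func main() {
	// Server-side per-interval rates reported by ethr above, in Gbit/s.
	server := []float64{5.59, 5.71, 5.57, 5.59, 5.66, 5.76, 5.80, 5.55, 5.52, 5.69}
	client := 4.97 // steady client-side rate in Gbit/s

	sum := 0.0
	for _, r := range server {
		sum += r
	}
	avg := sum / float64(len(server))
	fmt.Printf("server avg: %.2f Gbit/s, server/client ratio: %.3f\n", avg, avg/client)

	// Worst-case on-wire overhead per TCP segment at MTU 1500 with TCP
	// timestamps (MSS 1448): a 1500 B IP packet plus 38 B of Ethernet
	// framing (header, FCS, preamble, inter-frame gap) per 1448 B payload.
	overhead := (1500.0 + 38.0) / 1448.0
	fmt.Printf("max framing overhead at MTU 1500: %.3f\n", overhead)
	// ratio ~1.136 vs overhead ~1.062: framing can't account for the gap.
}
```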
In comparison, on the same machines, iperf 2.1.1 shows 4.97 Gbps on the server side:
[ssm-user@ip-172-31-15-12 ethr]$ iperf -s
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)

[  1] local 172.31.15.12 port 5001 connected with 172.31.10.116 port 46098
[ ID] Interval       Transfer     Bandwidth
[  1] 0.00-10.00 sec 5.79 GBytes  4.97 Gbits/sec
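One thing worth ruling out is a unit mismatch on iperf's side: its numbers are self-consistent if the "GBytes" transfer column is binary (GiB) while the "Gbits/sec" column is decimal, which is how I read its output (an assumption on my part, not something iperf states in the log):

```go
package main

import "fmt"

func main() {
	// iperf reported 5.79 GBytes transferred in 10.00 seconds.
	transferGiB := 5.79
	seconds := 10.0

	bits := transferGiB * (1 << 30) * 8 // binary GBytes -> bits
	gbps := bits / seconds / 1e9        // decimal Gbit/s

	fmt.Printf("%.2f Gbits/sec\n", gbps)
	// ~4.97 Gbits/sec, matching iperf's reported bandwidth.
}
```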