jkorell / iperf

Automatically exported from code.google.com/p/iperf

problem with iperf UDP tests #114

Closed. GoogleCodeExporter closed this issue 9 years ago.

GoogleCodeExporter commented 9 years ago
UDP reports 84% loss no matter what rate I use.
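
A minimal sketch of the kind of invocation being described, assuming a stock iperf3 server already running on the receiving host; the server address below is a placeholder and the exact options used for these runs are not shown in the report:

  # on the receiving host (assumed)
  iperf3 -s
  # on the sending host: UDP, 10-second test, target rate 1 Gbit/s
  iperf3 -c <server-address> -u -b 1G -t 10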

For example, 1G test:

Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  1.24 GBytes  1.06 Gbits/sec  0.001 ms  135795/162217 (84%)
[  4] Sent 162217 datagrams

and a 20G test:
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  24.8 GBytes  21.3 Gbits/sec  0.002 ms  2723914/3250236 (84%)
[  4] Sent 3250236 datagrams

100M test:
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec   124 MBytes   104 Mbits/sec  0.038 ms  13456/15827 (85%)
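
For reference, the reported percentage is just the ratio of lost to sent datagrams, rounded: in the 1G run above, 135795 / 162217 ≈ 0.837, shown as 84%.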

Original issue reported on code.google.com by bltier...@es.net on 26 Nov 2013 at 4:01

GoogleCodeExporter commented 9 years ago
Are you still getting high UDP packet losses? I haven't seen this. Possibly it got fixed along with the recent performance improvements.

Original comment by jef.posk...@gmail.com on 1 Dec 2013 at 2:15

GoogleCodeExporter commented 9 years ago
Yes, I still see 88% packet loss. 

 iperf3 -u -b1G -c 192.168.102.9
Connecting to host 192.168.102.9, port 5201
[  4] local 192.168.102.8 port 35631 connected to 192.168.102.9 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   116 MBytes   971 Mbits/sec              
[  4]   1.00-2.00   sec   128 MBytes  1.07 Gbits/sec              
[  4]   2.00-3.00   sec   128 MBytes  1.07 Gbits/sec              
[  4]   3.00-4.00   sec   128 MBytes  1.07 Gbits/sec              
[  4]   4.00-5.00   sec   128 MBytes  1.07 Gbits/sec              
[  4]   5.00-6.00   sec   128 MBytes  1.07 Gbits/sec              
[  4]   6.00-7.00   sec   128 MBytes  1.07 Gbits/sec              
[  4]   7.00-8.00   sec   128 MBytes  1.07 Gbits/sec              
[  4]   8.00-9.00   sec   128 MBytes  1.07 Gbits/sec              
[  4]   9.00-10.00  sec   128 MBytes  1.07 Gbits/sec              

Original comment by bltier...@es.net on 1 Dec 2013 at 3:51

GoogleCodeExporter commented 9 years ago
Taking this issue.

I've recreated this problem with 40G hosts on the testbed, using the tip of the mainline; the results are essentially unchanged from Comment 2:

[bmah@nersc-diskpt-6 ~]$ ./iperf3 -u -b1G -c 192.168.101.9 
Connecting to host 192.168.101.9, port 5201
[  4] local 192.168.101.8 port 56236 connected to 192.168.101.9 port 5201
[ ID] Interval           Transfer     Bandwidth       Total Datagrams
[  4]   0.00-1.00   sec   116 MBytes   972 Mbits/sec  14830  
[  4]   1.00-2.00   sec   128 MBytes  1.07 Gbits/sec  16384  
[  4]   2.00-3.00   sec   128 MBytes  1.07 Gbits/sec  16384  
[  4]   3.00-4.00   sec   128 MBytes  1.07 Gbits/sec  16384  
[  4]   4.00-5.00   sec   128 MBytes  1.07 Gbits/sec  16384  
[  4]   5.00-6.00   sec   128 MBytes  1.07 Gbits/sec  16381  
[  4]   6.00-7.00   sec   128 MBytes  1.07 Gbits/sec  16384  
[  4]   7.00-8.00   sec   128 MBytes  1.07 Gbits/sec  16385  
[  4]   8.00-9.00   sec   128 MBytes  1.07 Gbits/sec  16383  
[  4]   9.00-10.00  sec   128 MBytes  1.07 Gbits/sec  16384  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  1.24 GBytes  1.06 Gbits/sec  0.001 ms  138083/162214 (85%)
[  4] Sent 162214 datagrams

iperf Done.

-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.101.8, port 60520
[  5] local 192.168.101.9 port 5201 connected to 192.168.101.8 port 56236
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec  16.1 MBytes   135 Mbits/sec  0.002 ms  12743/14805 (86%)
[  5]   1.00-2.00   sec  19.7 MBytes   165 Mbits/sec  0.002 ms  13861/16381 (85%)
[  5]   2.00-3.00   sec  19.7 MBytes   165 Mbits/sec  0.001 ms  13876/16396 (85%)
[  5]   3.00-4.00   sec  19.7 MBytes   165 Mbits/sec  0.002 ms  13859/16379 (85%)
[  5]   4.00-5.00   sec  19.7 MBytes   165 Mbits/sec  0.001 ms  13866/16386 (85%)
[  5]   5.00-6.00   sec  18.6 MBytes   156 Mbits/sec  0.009 ms  14018/16400 (85%)
[  5]   6.00-7.00   sec  19.0 MBytes   160 Mbits/sec  0.001 ms  13864/16301 (85%)
[  5]   7.00-8.00   sec  18.7 MBytes   157 Mbits/sec  0.001 ms  14001/16394 (85%)
[  5]   8.00-9.00   sec  18.6 MBytes   156 Mbits/sec  0.001 ms  13998/16384 (85%)
[  5]   9.00-10.00  sec  18.7 MBytes   157 Mbits/sec  0.001 ms  13997/16388 (85%)
[  5]  10.00-10.00  sec  0.00 Bytes  0.00 bits/sec  0.001 ms  0/0 (-nan%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  1.24 GBytes  1.06 Gbits/sec  0.001 ms  138083/162214 (85%)

Original comment by bmah@es.net on 8 Jan 2014 at 11:17
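
One way to narrow down where datagrams are being dropped on the receiving host (not part of the original thread; a rough sketch with placeholder names) is to watch the kernel's UDP and socket-buffer counters during a run, and to retry with larger socket buffers:

  # on the receiving host: look for UDP receive buffer errors growing during a test
  netstat -su
  # current kernel limits on socket receive buffers
  sysctl net.core.rmem_max net.core.rmem_default
  # retry with larger socket buffers on both ends (iperf3's -w sets the socket buffer size)
  iperf3 -s -w 4M
  iperf3 -c <server-address> -u -b 1G -w 4M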

GoogleCodeExporter commented 9 years ago
Closing as invalid for now. After considerable testing, we note that we have only observed this problem on Mellanox interfaces in the ESnet 100G testbed, and it may be correlated with other issues seen in other experiments. However, running on the same hardware, we have not observed any abnormal loss on Myricom or Intel (e1000e) interfaces.

Original comment by bmah@es.net on 17 Jan 2014 at 8:41
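
Since the abnormal loss was only observed on one NIC type, per-interface drop counters would be another place to look; a hedged sketch, with eth0 as a placeholder interface name:

  # driver/NIC statistics, including rx drop and discard counters (counter names vary by driver)
  ethtool -S eth0 | grep -i -E 'drop|discard|err'
  # kernel per-interface counters (RX errors and drops)
  ip -s link show dev eth0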