esnet / iperf

iperf3: A TCP, UDP, and SCTP network bandwidth measurement tool

Incorrect and impossible upload result using TCP #1069

olekstomek opened this issue 4 years ago (status: Open)

olekstomek commented 4 years ago

Context: Bug Report

Look at this. It's the client; 5 Gbits/sec is impossible. (screenshot)

This is the server at the same time. (screenshot)

I then swapped the roles, so the former client is now the server and vice versa. This is a screenshot from the client: over 12 Gbit/s. Take a look at the RAM usage. (screenshot)

These tests were performed on a local network between two of my computers. The ports on the network cards are 1 Gbps, so as you can see these results cannot be correct.

I saw a similar problem when measuring against an external server outside my local network.

Could the problem be caused by some configuration of my operating system? I am not using a VPN, a proxy, etc., and I closed other apps. In Task Manager, this speed is not visible in the network adapter graph or values.

davidBar-On commented 4 years ago

@olekstomek, it seems as if TCP on your machines has an unlimited (or huge) window size. From the information you sent it is clear that the transferred packets are buffered in the machine's internal memory, so if there is no sending limit the reported transfer rate is just the rate at which data is copied into memory.

I don't know how this may happen, as the maximum window size on Windows seems to be 1 GB. The following may help to check the window-size configuration: Description of Windows TCP features.
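
For reference, a rough way to check what send-buffer size the OS actually grants a TCP socket (which is essentially what iperf3's -w option requests via setsockopt()) is a few lines of C. This is only a sketch against the plain POSIX socket API, not iperf3 code, and the 256 KB request is an arbitrary example value:

/* Sketch: query the default SO_SNDBUF, ask for 256 KB, and read back what
 * the OS actually granted.  Compiles under Linux/Cygwin, e.g. cc check_buf.c */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int requested = 256 * 1024;            /* what "-w 256K" would ask for */
    int granted;
    socklen_t len = sizeof(granted);

    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &granted, &len);
    printf("default SO_SNDBUF: %d bytes\n", granted);

    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &requested, sizeof(requested));
    len = sizeof(granted);
    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &granted, &len);
    printf("SO_SNDBUF after requesting %d: %d bytes\n", requested, granted);

    close(fd);
    return 0;
}

(Note that this reports the socket send buffer, which is only loosely related to the effective TCP window; Windows auto-tuning manages the window separately.)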

olekstomek commented 4 years ago

@davidBar-On Good and interesting point. I checked the registry on my OS. There is no option for TcpWindowSize or anything similar. Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters (screenshot)

Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces (screenshots of the per-interface registry subkeys)

Current global TCP settings in my OS.

Microsoft Windows [Version 10.0.18363.1171]
(c) 2019 Microsoft Corporation. All rights reserved.

C:\Users\tzok>netsh interface tcp show global
Querying active state...

TCP Global Parameters
----------------------------------------------
Receive-Side Scaling State          : enabled
Receive Window Auto-Tuning Level    : normal
Add-On Congestion Control Provider  : default
ECN Capability                      : disabled
RFC 1323 Timestamps                 : disabled
Initial RTO                         : 1000
Receive Segment Coalescing State    : enabled
Non Sack Rtt Resiliency             : disabled
Max SYN Retransmissions             : 4
Fast Open                           : enabled
Fast Open Fallback                  : enabled
HyStart                             : enabled
Pacing Profile                      : off

The Speedtest.net CLI works fine on the same machine:

C:\Users\tzok\Downloads\ookla-speedtest-1.0.0-win64>speedtest

   Speedtest by Ookla

     Server: Orange Polska S.A. - Lodz (id = 4206)
        ISP: Toya sp.z.o.o
    Latency:     8.01 ms   (0.61 ms jitter)
   Download:   152.66 Mbps (data used: 140.8 MB)
     Upload:    24.51 Mbps (data used: 13.9 MB)
Packet Loss:     0.0%
 Result URL: https://www.speedtest.net/result/c/ebe48798-0077-45e7-a9ee-a18750a8479e
davidBar-On commented 4 years ago

@olekstomek, I don't really have a clue about what is happening. A few things to consider/try:

  1. Was the iperf3 you used built for Vista ("iPerf 3.1.3 (8 jun 2016 - 1.3 MiB for Windows Vista 64bits to Windows 10 64bits)")? If so, maybe there are inconsistencies between Windows 10 and Vista?

  2. It would help if you could use Wireshark to capture the data sent/received. That should allow seeing the actual window size used, which may be helpful.

  3. Try using the -w option to set the window size manually, e.g. -w 256K. Whether or not this changes the behavior may help to understand the problem.

olekstomek commented 4 years ago

@davidBar-On,

  1. Was the iperf3 you used built for Vista ("iPerf 3.1.3 (8 jun 2016 - 1.3 MiB for Windows Vista 64bits to Windows 10 64bits)")? If so, maybe there are inconsistencies between Windows 10 and Vista?

Yes, but it was the newest version from here. I checked iPerf 3.1.3 on my Lenovo G50-80 laptop with 12 GB RAM, an HDD, and Windows 10 Home, and it's OK:

Microsoft Windows [Version 10.0.18363.1139]
(c) 2019 Microsoft Corporation. All rights reserved.

E:\pobrane\iperf-3.1.3-win64\iperf-3.1.3-win64>iperf3 iperf3 -c 192.168.43.127
Connecting to host 192.168.43.127, port 5201
[  4] local 192.168.43.44 port 50130 connected to 192.168.43.127 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.01   sec  2.12 MBytes  17.6 Mbits/sec
[  4]   1.01-2.00   sec  2.00 MBytes  16.9 Mbits/sec
[  4]   2.00-3.00   sec  2.00 MBytes  16.8 Mbits/sec
[  4]   3.00-4.00   sec  1.88 MBytes  15.7 Mbits/sec
[  4]   4.00-5.00   sec  2.00 MBytes  16.8 Mbits/sec
[  4]   5.00-6.02   sec  1.62 MBytes  13.5 Mbits/sec
[  4]   6.02-7.00   sec  1.88 MBytes  15.9 Mbits/sec
[  4]   7.00-8.00   sec  2.00 MBytes  16.8 Mbits/sec
[  4]   8.00-9.01   sec  1.75 MBytes  14.6 Mbits/sec
[  4]   9.01-10.00  sec  1.88 MBytes  15.9 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  19.1 MBytes  16.0 Mbits/sec                  sender
[  4]   0.00-10.00  sec  19.1 MBytes  16.0 Mbits/sec                  receiver

iperf Done.

E:\pobrane\iperf-3.1.3-win64\iperf-3.1.3-win64>iperf3 iperf3 -c 192.168.43.127 -R
Connecting to host 192.168.43.127, port 5201
Reverse mode, remote host 192.168.43.127 is sending
[  4] local 192.168.43.44 port 50139 connected to 192.168.43.127 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.01   sec  1.71 MBytes  14.3 Mbits/sec
[  4]   1.01-2.01   sec  1.81 MBytes  15.1 Mbits/sec
[  4]   2.01-3.01   sec  1.97 MBytes  16.6 Mbits/sec
[  4]   3.01-4.00   sec  1.85 MBytes  15.6 Mbits/sec
[  4]   4.00-5.00   sec  2.10 MBytes  17.6 Mbits/sec
[  4]   5.00-6.00   sec  2.11 MBytes  17.7 Mbits/sec
[  4]   6.00-7.00   sec  1.97 MBytes  16.5 Mbits/sec
[  4]   7.00-8.00   sec  1.95 MBytes  16.4 Mbits/sec
[  4]   8.00-9.00   sec  1.83 MBytes  15.3 Mbits/sec
[  4]   9.00-10.00  sec  1.80 MBytes  15.1 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  19.2 MBytes  16.1 Mbits/sec                  sender
[  4]   0.00-10.00  sec  19.2 MBytes  16.1 Mbits/sec                  receiver

iperf Done.

Additionally, I tried version 3.9 from this post

https://files.budman.pw/iperf3.9_64.zip
Name: iperf3.9_64.zip
Size: 1542276 bytes (1506 KiB)
SHA256: 15D2D3C2A8B9A69EFD9991FEFE5206E31D6055399F7A4C663C3CB6D77B6770F8

(OK, I'm a bit lazy and didn't compile this version from source on my computer, but I believe this build is fine.)

C:\Users\tzok\Downloads\iperf3.9_64>iperf3 -c 192.168.43.26
Connecting to host 192.168.43.26, port 5201
[  5] local 192.168.43.127 port 51662 connected to 192.168.43.26 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  2.17 GBytes  18.7 Gbits/sec
[  5]   1.00-2.00   sec  2.06 GBytes  17.7 Gbits/sec
[  5]   2.00-3.00   sec  2.01 GBytes  17.3 Gbits/sec
[  5]   3.00-4.00   sec  2.11 GBytes  18.1 Gbits/sec
[  5]   4.00-5.00   sec  2.05 GBytes  17.6 Gbits/sec
[  5]   4.00-5.00   sec  2.05 GBytes  17.6 Gbits/sec
iperf3: error - unable to write to stream socket: Broken pipe

C:\Users\tzok\Downloads\iperf3.9_64>iperf3 -c 192.168.43.26 -w 256K
Connecting to host 192.168.43.26, port 5201
[  5] local 192.168.43.127 port 51664 connected to 192.168.43.26 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  2.12 GBytes  18.2 Gbits/sec
[  5]   1.00-2.00   sec  2.10 GBytes  18.0 Gbits/sec
[  5]   2.00-3.00   sec  2.03 GBytes  17.4 Gbits/sec
[  5]   3.00-4.00   sec  2.09 GBytes  18.0 Gbits/sec
[  5]   4.00-5.00   sec  2.09 GBytes  17.9 Gbits/sec
[  5]   4.00-5.00   sec  2.09 GBytes  17.9 Gbits/sec
iperf3: error - unable to write to stream socket: Broken pipe

C:\Users\tzok\Downloads\iperf3.9_64>iperf3 -c 192.168.43.26 -t4
Connecting to host 192.168.43.26, port 5201
[  5] local 192.168.43.127 port 51715 connected to 192.168.43.26 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  2.08 GBytes  17.9 Gbits/sec
[  5]   1.00-2.00   sec  2.11 GBytes  18.1 Gbits/sec
[  5]   2.00-3.00   sec  1.97 GBytes  16.9 Gbits/sec
[  5]   3.00-4.00   sec  2.07 GBytes  17.7 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-4.00   sec  8.24 GBytes  17.7 Gbits/sec                  sender
[  5]   0.00-4.13   sec  10.7 MBytes  21.7 Mbits/sec                  receiver

iperf Done.

C:\Users\tzok\Downloads\iperf3.9_64>iperf3 -c 192.168.43.26 -t4 -w 256K
Connecting to host 192.168.43.26, port 5201
[  5] local 192.168.43.127 port 51717 connected to 192.168.43.26 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  2.20 GBytes  18.9 Gbits/sec
[  5]   1.00-2.00   sec  2.10 GBytes  18.1 Gbits/sec
[  5]   2.00-3.00   sec  2.07 GBytes  17.8 Gbits/sec
[  5]   3.00-4.00   sec  2.13 GBytes  18.3 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-4.00   sec  8.50 GBytes  18.3 Gbits/sec                  sender
[  5]   0.00-4.04   sec  16.9 MBytes  35.1 Mbits/sec                  receiver

  • It would help if you could use Wireshark to capture the data sent/received. That should allow seeing the actual window size used, which may be helpful.

With -w 256K: (screenshot)

Default: (screenshot)

  • Try using the -w option to set the window size manually, e.g. -w 256K. Whether or not this changes the behavior may help to understand the problem.
Microsoft Windows [Version 10.0.18363.1198]
(c) 2019 Microsoft Corporation. All rights reserved.

C:\Users\tzok\Downloads\iperf-3.1.3-win64>iperf3 -c 192.168.43.26 -w 256K
Connecting to host 192.168.43.26, port 5201
[  4] local 192.168.43.127 port 51610 connected to 192.168.43.26 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  2.19 GBytes  18.8 Gbits/sec
[  4]   1.00-2.00   sec  2.16 GBytes  18.6 Gbits/sec
[  4]   2.00-3.00   sec  2.08 GBytes  17.9 Gbits/sec
[  4]   3.00-4.00   sec  2.03 GBytes  17.5 Gbits/sec

C:\Users\tzok\Downloads\iperf-3.1.3-win64>iperf3 -c 192.168.43.26 -w 256K -R
Connecting to host 192.168.43.26, port 5201
Reverse mode, remote host 192.168.43.26 is sending
[  4] local 192.168.43.127 port 51613 connected to 192.168.43.26 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  2.38 MBytes  19.9 Mbits/sec
[  4]   1.00-2.00   sec  2.49 MBytes  21.0 Mbits/sec
[  4]   2.00-3.00   sec  2.49 MBytes  20.8 Mbits/sec
[  4]   3.00-4.00   sec  2.36 MBytes  19.8 Mbits/sec
[  4]   4.00-5.00   sec  2.21 MBytes  18.5 Mbits/sec
[  4]   5.00-6.00   sec  2.29 MBytes  19.2 Mbits/sec
[  4]   6.00-7.00   sec  2.14 MBytes  18.0 Mbits/sec
[  4]   7.00-8.00   sec  2.39 MBytes  20.1 Mbits/sec
[  4]   8.00-9.00   sec  2.36 MBytes  19.8 Mbits/sec
[  4]   9.00-10.00  sec  2.28 MBytes  19.1 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  23.5 MBytes  19.7 Mbits/sec                  sender
[  4]   0.00-10.00  sec  23.5 MBytes  19.7 Mbits/sec                  receiver

iperf Done.

C:\Users\tzok\Downloads\iperf-3.1.3-win64>iperf3 -c 192.168.43.26 -P4 -t4 -w256K -d
send_parameters:
{
        "tcp":  true,
        "omit": 0,
        "time": 4,
        "parallel":     4,
        "window":       262144,
        "len":  131072,
        "client_version":       "3.1.3"
}
Connecting to host 192.168.43.26, port 5201
SO_SNDBUF is 262144
[  4] local 192.168.43.127 port 51835 connected to 192.168.43.26 port 5201
SO_SNDBUF is 262144
[  6] local 192.168.43.127 port 51836 connected to 192.168.43.26 port 5201
SO_SNDBUF is 262144
[  8] local 192.168.43.127 port 51837 connected to 192.168.43.26 port 5201
SO_SNDBUF is 262144
[ 10] local 192.168.43.127 port 51838 connected to 192.168.43.26 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   511 MBytes  4.29 Gbits/sec
[  6]   0.00-1.00   sec   511 MBytes  4.29 Gbits/sec
[  8]   0.00-1.00   sec   511 MBytes  4.29 Gbits/sec
[ 10]   0.00-1.00   sec   511 MBytes  4.29 Gbits/sec
[SUM]   0.00-1.00   sec  2.00 GBytes  17.1 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  4]   1.00-2.00   sec   524 MBytes  4.39 Gbits/sec
[  6]   1.00-2.00   sec   524 MBytes  4.39 Gbits/sec
[  8]   1.00-2.00   sec   524 MBytes  4.39 Gbits/sec
[ 10]   1.00-2.00   sec   524 MBytes  4.39 Gbits/sec
[SUM]   1.00-2.00   sec  2.05 GBytes  17.6 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  4]   2.00-3.00   sec   489 MBytes  4.10 Gbits/sec
[  6]   2.00-3.00   sec   489 MBytes  4.10 Gbits/sec
[  8]   2.00-3.00   sec   489 MBytes  4.10 Gbits/sec
[ 10]   2.00-3.00   sec   489 MBytes  4.10 Gbits/sec
[SUM]   2.00-3.00   sec  1.91 GBytes  16.4 Gbits/sec
send_results
{
        "cpu_util_total":       98.322061,
        "cpu_util_user":        2.268408,
        "cpu_util_system":      96.053653,
        "sender_has_retransmits":       0,
        "streams":      [{
                        "id":   1,
                        "bytes":        2117730304,
                        "retransmits":  -1,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0
                }, {
                        "id":   3,
                        "bytes":        2119434240,
                        "retransmits":  -1,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0
                }, {
                        "id":   4,
                        "bytes":        2119434240,
                        "retransmits":  -1,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0
                }, {
                        "id":   5,
                        "bytes":        2119434240,
                        "retransmits":  -1,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0
                }]
}
get_results
{
        "cpu_util_total":       0.668928,
        "cpu_util_user":        0.501696,
        "cpu_util_system":      0.167232,
        "sender_has_retransmits":       -1,
        "streams":      [{
                        "id":   1,
                        "bytes":        4633131,
                        "retransmits":  -1,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0
                }, {
                        "id":   3,
                        "bytes":        5106635,
                        "retransmits":  -1,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0
                }, {
                        "id":   4,
                        "bytes":        5064085,
                        "retransmits":  -1,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0
                }, {
                        "id":   5,
                        "bytes":        5053908,
                        "retransmits":  -1,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0
                }]
}
- - - - - - - - - - - - - - - - - - - - - - - - -
[  4]   3.00-4.00   sec   496 MBytes  4.17 Gbits/sec
[  6]   3.00-4.00   sec   498 MBytes  4.18 Gbits/sec
[  8]   3.00-4.00   sec   498 MBytes  4.18 Gbits/sec
[ 10]   3.00-4.00   sec   498 MBytes  4.18 Gbits/sec
[SUM]   3.00-4.00   sec  1.94 GBytes  16.7 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-4.00   sec  1.97 GBytes  4.24 Gbits/sec                  sender
[  4]   0.00-4.00   sec  4.42 MBytes  9.27 Mbits/sec                  receiver
[  6]   0.00-4.00   sec  1.97 GBytes  4.24 Gbits/sec                  sender
[  6]   0.00-4.00   sec  4.87 MBytes  10.2 Mbits/sec                  receiver
[  8]   0.00-4.00   sec  1.97 GBytes  4.24 Gbits/sec                  sender
[  8]   0.00-4.00   sec  4.83 MBytes  10.1 Mbits/sec                  receiver
[ 10]   0.00-4.00   sec  1.97 GBytes  4.24 Gbits/sec                  sender
[ 10]   0.00-4.00   sec  4.82 MBytes  10.1 Mbits/sec                  receiver
[SUM]   0.00-4.00   sec  7.89 GBytes  17.0 Gbits/sec                  sender
[SUM]   0.00-4.00   sec  18.9 MBytes  39.7 Mbits/sec                  receiver

iperf Done.

iPerf 2.0.9 (6 jun 2016 - 1.7 MiB for Windows Vista 64bits to Windows 10 64bits)

Microsoft Windows [Version 10.0.18363.1198]
(c) 2019 Microsoft Corporation. All rights reserved.

C:\Users\tzok\Downloads\iperf-2.0.9-win64>iperf -c 192.168.43.26
------------------------------------------------------------
Client connecting to 192.168.43.26, TCP port 5001
TCP window size:  208 KByte (default)
------------------------------------------------------------
[  3] local 192.168.43.127 port 51797 connected with 192.168.43.26 port 5001
write failed: Broken pipe
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 5.0 sec  10.9 GBytes  18.7 Gbits/sec

C:\Users\tzok\Downloads\iperf-2.0.9-win64>iperf -c 192.168.43.26 -t4
------------------------------------------------------------
Client connecting to 192.168.43.26, TCP port 5001
TCP window size:  208 KByte (default)
------------------------------------------------------------
[  3] local 192.168.43.127 port 51798 connected with 192.168.43.26 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 4.0 sec  8.91 GBytes  19.1 Gbits/sec

When I check the console on the laptop where the server is running, the transfers there are much smaller and realistic, at the level of several dozen Mbps (I'm not including them so as not to hurt the readability of the client-side results).

So you can see that the problem still exists. My goal is to show that the problem is in the application. Even if my operating system has an unusual configuration, iperf should not report false results; it should be resilient to that.

davidBar-On commented 4 years ago

@olekstomek, until we understand the root cause of the problem, it will not be possible to determine whether the problem is in the application. In any case, you provided good data, so I hope it will be possible to find the root cause.

Some observations from the data you sent:

  1. The issue is not related to the iperf3 build, as it also happens with both iperf2 and a newer version of iperf3.
  2. The TCP stack is working properly. The difference between the sent and received sequence numbers is about 220 MB, which is within the window size.
  3. As expected, the client terminates with the error unable to write to stream socket: Broken pipe after sending 10-12 GB, which, based on the original data you sent, is probably because the computer's memory filled up.

Based on this, my current guess is that the problem is related to the network buffers that feed the TCP stack. It may be that the network buffer size was somehow set to a very large value, beyond the available memory, although I don't know how that could happen. Another option is that some kind of NIC offload processing is enabled, but that should not use internal memory.

In any case, here are some things that can be done to further evaluate the issue:

  1. In a PowerShell window, run Get-NetTCPSetting and Get-NetIPConfiguration and send their output.

  2. In Device Manager -> Network adapters, open the relevant adapter. Send what is configured in its Advanced tab for Transmit Buffers, Jumbo Frame and Large Send Offload (xxx) (or any other parameter that you think is useful).

  3. Diagnose and troubleshoot the network interface as described here. If this somehow fixes the problem, please send the logs so we can understand what caused it.

olekstomek commented 4 years ago

@davidBar-On

until we understand the root cause of the problem, it will not be possible to determine whether the problem is in the application.

Yes, you are right. I made that claim because, for example, the speedtest.net CLI works properly, as do other speed tests on web pages, e.g. this or this (though maybe the web cannot be compared with an application running on the OS without a browser).

3. As expected, the client terminates with the error unable to write to stream socket: Broken pipe after sending 10-12 GB, which, based on the original data you sent, is probably because the computer's memory filled up.

As an aside, I thought the operating system would swap data from RAM to the SSD, but I'm guessing 2 GB/s of data is a lot and it happens too fast. Back to the main point: yes, that's why I limited the test time with -t4.

Based on this, my current guess is that the problem is related to the network buffers that feed the TCP stack. It may be that the network buffer size was somehow set to a very large value, beyond the available memory, although I don't know how that could happen. Another option is that some kind of NIC offload processing is enabled, but that should not use internal memory.

As I checked before, the speedtest.net CLI works OK; I also verified that they use TCP for the tests.

Speedtest.net operates mainly over TCP testing with a HTTP fallback for maximum compatibility. Speedtest.net measures ping (latency), download speed and upload speed.

The word "mainly" probably doesn't mean always, but I haven't found an option to choose between TCP and UDP in either the CLI or the GUI of the application. I checked with Wireshark, though, and the protocol is TCP. I did a test in the speedtest.net GUI application, and it looks like the upload behavior is similar at the beginning and then the value gets adjusted?

(screenshot: speedtest_net_upload)

but not always; this is a mobile LTE connection: (screenshot). Over 300 Mbps on 4G/4G+ is impossible.

  1. In a PowerShell window, run Get-NetTCPSetting and Get-NetIPConfiguration and send their output.

Ethernet adapter:

InterfaceAlias       : Ethernet
InterfaceIndex       : 4
InterfaceDescription : Intel(R) Ethernet Connection (10) I219-LM
NetAdapter.Status    : Disconnected

PS C:\WINDOWS\system32> Get-NetTCPSetting

SettingName                     : Automatic
MinRto(ms)                      :
InitialCongestionWindow(MSS)    :
CongestionProvider              :
CwndRestart                     :
DelayedAckTimeout(ms)           :
DelayedAckFrequency             :
MemoryPressureProtection        :
AutoTuningLevelLocal            :
AutoTuningLevelGroupPolicy      :
AutoTuningLevelEffective        :
EcnCapability                   :
Timestamps                      :
InitialRto(ms)                  :
ScalingHeuristics               :
DynamicPortRangeStartPort       :
DynamicPortRangeNumberOfPorts   :
AutomaticUseCustom              :
NonSackRttResiliency            :
ForceWS                         :
MaxSynRetransmissions           :
AutoReusePortRangeStartPort     :
AutoReusePortRangeNumberOfPorts :

SettingName                     : InternetCustom
MinRto(ms)                      : 300
InitialCongestionWindow(MSS)    : 10
CongestionProvider              : CUBIC
CwndRestart                     : False
DelayedAckTimeout(ms)           : 40
DelayedAckFrequency             : 2
MemoryPressureProtection        : Disabled
AutoTuningLevelLocal            : Normal
AutoTuningLevelGroupPolicy      : NotConfigured
AutoTuningLevelEffective        : Local
EcnCapability                   : Disabled
Timestamps                      : Disabled
InitialRto(ms)                  : 1000
ScalingHeuristics               : Disabled
DynamicPortRangeStartPort       : 49152
DynamicPortRangeNumberOfPorts   : 16384
AutomaticUseCustom              : Disabled
NonSackRttResiliency            : Disabled
ForceWS                         : Enabled
MaxSynRetransmissions           : 4
AutoReusePortRangeStartPort     : 0
AutoReusePortRangeNumberOfPorts : 0

SettingName                     : DatacenterCustom
MinRto(ms)                      : 20
InitialCongestionWindow(MSS)    : 10
CongestionProvider              : CUBIC
CwndRestart                     : False
DelayedAckTimeout(ms)           : 10
DelayedAckFrequency             : 2
MemoryPressureProtection        : Disabled
AutoTuningLevelLocal            : Normal
AutoTuningLevelGroupPolicy      : NotConfigured
AutoTuningLevelEffective        : Local
EcnCapability                   : Disabled
Timestamps                      : Disabled
InitialRto(ms)                  : 1000
ScalingHeuristics               : Disabled
DynamicPortRangeStartPort       : 49152
DynamicPortRangeNumberOfPorts   : 16384
AutomaticUseCustom              : Disabled
NonSackRttResiliency            : Disabled
ForceWS                         : Enabled
MaxSynRetransmissions           : 4
AutoReusePortRangeStartPort     : 0
AutoReusePortRangeNumberOfPorts : 0

SettingName                     : Compat
MinRto(ms)                      : 300
InitialCongestionWindow(MSS)    : 4
CongestionProvider              : NewReno
CwndRestart                     : False
DelayedAckTimeout(ms)           : 200
DelayedAckFrequency             : 2
MemoryPressureProtection        : Disabled
AutoTuningLevelLocal            : Normal
AutoTuningLevelGroupPolicy      : NotConfigured
AutoTuningLevelEffective        : Local
EcnCapability                   : Disabled
Timestamps                      : Disabled
InitialRto(ms)                  : 1000
ScalingHeuristics               : Disabled
DynamicPortRangeStartPort       : 49152
DynamicPortRangeNumberOfPorts   : 16384
AutomaticUseCustom              : Disabled
NonSackRttResiliency            : Disabled
ForceWS                         : Enabled
MaxSynRetransmissions           : 4
AutoReusePortRangeStartPort     : 0
AutoReusePortRangeNumberOfPorts : 0

SettingName                     : Datacenter
MinRto(ms)                      : 20
InitialCongestionWindow(MSS)    : 10
CongestionProvider              : CUBIC
CwndRestart                     : False
DelayedAckTimeout(ms)           : 10
DelayedAckFrequency             : 2
MemoryPressureProtection        : Disabled
AutoTuningLevelLocal            : Normal
AutoTuningLevelGroupPolicy      : NotConfigured
AutoTuningLevelEffective        : Local
EcnCapability                   : Disabled
Timestamps                      : Disabled
InitialRto(ms)                  : 1000
ScalingHeuristics               : Disabled
DynamicPortRangeStartPort       : 49152
DynamicPortRangeNumberOfPorts   : 16384
AutomaticUseCustom              : Disabled
NonSackRttResiliency            : Disabled
ForceWS                         : Enabled
MaxSynRetransmissions           : 4
AutoReusePortRangeStartPort     : 0
AutoReusePortRangeNumberOfPorts : 0

SettingName                     : Internet
MinRto(ms)                      : 300
InitialCongestionWindow(MSS)    : 10
CongestionProvider              : CUBIC
CwndRestart                     : False
DelayedAckTimeout(ms)           : 40
DelayedAckFrequency             : 2
MemoryPressureProtection        : Disabled
AutoTuningLevelLocal            : Normal
AutoTuningLevelGroupPolicy      : NotConfigured
AutoTuningLevelEffective        : Local
EcnCapability                   : Disabled
Timestamps                      : Disabled
InitialRto(ms)                  : 1000
ScalingHeuristics               : Disabled
DynamicPortRangeStartPort       : 49152
DynamicPortRangeNumberOfPorts   : 16384
AutomaticUseCustom              : Disabled
NonSackRttResiliency            : Disabled
ForceWS                         : Enabled
MaxSynRetransmissions           : 4
AutoReusePortRangeStartPort     : 0
AutoReusePortRangeNumberOfPorts : 0

PS C:\WINDOWS\system32> Get-NetIPConfiguration

InterfaceAlias       : vEthernet (Default Switch)
InterfaceIndex       : 23
InterfaceDescription : Hyper-V Virtual Ethernet Adapter
IPv4Address          : 172.17.164.177
IPv4DefaultGateway   :
DNSServer            :

InterfaceAlias       : Ethernet
InterfaceIndex       : 4
InterfaceDescription : Intel(R) Ethernet Connection (10) I219-LM
NetProfile.Name      : toya637428657874_5GHz
IPv4Address          : 10.8.39.86
IPv4DefaultGateway   : 10.8.32.1
DNSServer            : 217.113.224.135
                       217.113.224.36

InterfaceAlias       : Bluetooth Network Connection
InterfaceIndex       : 10
InterfaceDescription : Bluetooth Device (Personal Area Network)
NetAdapter.Status    : Disconnected

WiFi adapter:

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

Try the new cross-platform PowerShell https://aka.ms/pscore6

PS C:\WINDOWS\system32> Get-NetTCPSetting

SettingName                     : Automatic
MinRto(ms)                      :
InitialCongestionWindow(MSS)    :
CongestionProvider              :
CwndRestart                     :
DelayedAckTimeout(ms)           :
DelayedAckFrequency             :
MemoryPressureProtection        :
AutoTuningLevelLocal            :
AutoTuningLevelGroupPolicy      :
AutoTuningLevelEffective        :
EcnCapability                   :
Timestamps                      :
InitialRto(ms)                  :
ScalingHeuristics               :
DynamicPortRangeStartPort       :
DynamicPortRangeNumberOfPorts   :
AutomaticUseCustom              :
NonSackRttResiliency            :
ForceWS                         :
MaxSynRetransmissions           :
AutoReusePortRangeStartPort     :
AutoReusePortRangeNumberOfPorts :

SettingName                     : InternetCustom
MinRto(ms)                      : 300
InitialCongestionWindow(MSS)    : 10
CongestionProvider              : CUBIC
CwndRestart                     : False
DelayedAckTimeout(ms)           : 40
DelayedAckFrequency             : 2
MemoryPressureProtection        : Disabled
AutoTuningLevelLocal            : Normal
AutoTuningLevelGroupPolicy      : NotConfigured
AutoTuningLevelEffective        : Local
EcnCapability                   : Disabled
Timestamps                      : Disabled
InitialRto(ms)                  : 1000
ScalingHeuristics               : Disabled
DynamicPortRangeStartPort       : 49152
DynamicPortRangeNumberOfPorts   : 16384
AutomaticUseCustom              : Disabled
NonSackRttResiliency            : Disabled
ForceWS                         : Enabled
MaxSynRetransmissions           : 4
AutoReusePortRangeStartPort     : 0
AutoReusePortRangeNumberOfPorts : 0

SettingName                     : DatacenterCustom
MinRto(ms)                      : 20
InitialCongestionWindow(MSS)    : 10
CongestionProvider              : CUBIC
CwndRestart                     : False
DelayedAckTimeout(ms)           : 10
DelayedAckFrequency             : 2
MemoryPressureProtection        : Disabled
AutoTuningLevelLocal            : Normal
AutoTuningLevelGroupPolicy      : NotConfigured
AutoTuningLevelEffective        : Local
EcnCapability                   : Disabled
Timestamps                      : Disabled
InitialRto(ms)                  : 1000
ScalingHeuristics               : Disabled
DynamicPortRangeStartPort       : 49152
DynamicPortRangeNumberOfPorts   : 16384
AutomaticUseCustom              : Disabled
NonSackRttResiliency            : Disabled
ForceWS                         : Enabled
MaxSynRetransmissions           : 4
AutoReusePortRangeStartPort     : 0
AutoReusePortRangeNumberOfPorts : 0

SettingName                     : Compat
MinRto(ms)                      : 300
InitialCongestionWindow(MSS)    : 4
CongestionProvider              : NewReno
CwndRestart                     : False
DelayedAckTimeout(ms)           : 200
DelayedAckFrequency             : 2
MemoryPressureProtection        : Disabled
AutoTuningLevelLocal            : Normal
AutoTuningLevelGroupPolicy      : NotConfigured
AutoTuningLevelEffective        : Local
EcnCapability                   : Disabled
Timestamps                      : Disabled
InitialRto(ms)                  : 1000
ScalingHeuristics               : Disabled
DynamicPortRangeStartPort       : 49152
DynamicPortRangeNumberOfPorts   : 16384
AutomaticUseCustom              : Disabled
NonSackRttResiliency            : Disabled
ForceWS                         : Enabled
MaxSynRetransmissions           : 4
AutoReusePortRangeStartPort     : 0
AutoReusePortRangeNumberOfPorts : 0

SettingName                     : Datacenter
MinRto(ms)                      : 20
InitialCongestionWindow(MSS)    : 10
CongestionProvider              : CUBIC
CwndRestart                     : False
DelayedAckTimeout(ms)           : 10
DelayedAckFrequency             : 2
MemoryPressureProtection        : Disabled
AutoTuningLevelLocal            : Normal
AutoTuningLevelGroupPolicy      : NotConfigured
AutoTuningLevelEffective        : Local
EcnCapability                   : Disabled
Timestamps                      : Disabled
InitialRto(ms)                  : 1000
ScalingHeuristics               : Disabled
DynamicPortRangeStartPort       : 49152
DynamicPortRangeNumberOfPorts   : 16384
AutomaticUseCustom              : Disabled
NonSackRttResiliency            : Disabled
ForceWS                         : Enabled
MaxSynRetransmissions           : 4
AutoReusePortRangeStartPort     : 0
AutoReusePortRangeNumberOfPorts : 0

SettingName                     : Internet
MinRto(ms)                      : 300
InitialCongestionWindow(MSS)    : 10
CongestionProvider              : CUBIC
CwndRestart                     : False
DelayedAckTimeout(ms)           : 40
DelayedAckFrequency             : 2
MemoryPressureProtection        : Disabled
AutoTuningLevelLocal            : Normal
AutoTuningLevelGroupPolicy      : NotConfigured
AutoTuningLevelEffective        : Local
EcnCapability                   : Disabled
Timestamps                      : Disabled
InitialRto(ms)                  : 1000
ScalingHeuristics               : Disabled
DynamicPortRangeStartPort       : 49152
DynamicPortRangeNumberOfPorts   : 16384
AutomaticUseCustom              : Disabled
NonSackRttResiliency            : Disabled
ForceWS                         : Enabled
MaxSynRetransmissions           : 4
AutoReusePortRangeStartPort     : 0
AutoReusePortRangeNumberOfPorts : 0

PS C:\WINDOWS\system32> Get-NetIPConfiguration

InterfaceAlias       : vEthernet (Default Switch)
InterfaceIndex       : 23
InterfaceDescription : Hyper-V Virtual Ethernet Adapter
IPv4Address          : 172.17.164.177
IPv4DefaultGateway   :
DNSServer            :

InterfaceAlias       : Wi-Fi
InterfaceIndex       : 15
InterfaceDescription : Intel(R) Wi-Fi 6 AX201 160MHz
NetProfile.Name      : Tomek Mi9
IPv4Address          : 192.168.43.127
IPv4DefaultGateway   : 192.168.43.1
DNSServer            : 192.168.43.1

InterfaceAlias       : Bluetooth Network Connection
InterfaceIndex       : 10
InterfaceDescription : Bluetooth Device (Personal Area Network)
NetAdapter.Status    : Disconnected

InterfaceAlias       : Ethernet
InterfaceIndex       : 4
InterfaceDescription : Intel(R) Ethernet Connection (10) I219-LM
NetAdapter.Status    : Disconnected

2. In Device Manager -> Network adapters, open the relevant adapter. Send what is configured in its Advanced tab for Transmit Buffers, Jumbo Frame and Large Send Offload (xxx) (or any other parameter that you think is useful).

On the Ethernet adapter: Transmit Buffers is set to 512; Jumbo Packet (I don't have a Jumbo Frame option) is set to Disabled; Large Send Offload V2 (IPv4 and IPv6) is set to Disabled.

As for the other settings, I've looked at them, but it's hard for me to see their effect on network performance. I had the idea of comparing these settings with my Lenovo laptop, where the results are correct, but on Windows Home I do not have the "Advanced" tab, so I cannot check it.

On the WiFi adapter I don't have these options: (screenshots)

3. Diagnose and troubleshoot the network interface as described here. If this somehow fixes the problem, please send the logs so we can understand what caused it.

I did this and re-tested on the local network, but nothing changed. As I mentioned in the first post, the problem occurs on two computers (one was the client and the other the server, and then vice versa; the ~5 Gbps vs ~18 Gbps transfers may come from the RAM, DDR3 vs DDR4). I performed tests against an external server over a cable as well as on a separate local WiFi network where the router was a different device. My main ISP provides the internet in a configuration where the router I have at home is in bridge mode, so I have no influence over that network's configuration. That's why I also run tests on a network I create myself, to exclude any other dependencies. But the problem is always there.

davidBar-On commented 4 years ago

@olekstomek,

Yes, you are right. I made that claim because, for example, the speedtest.net CLI works properly, as do other speed tests on web pages, e.g. this or this (though maybe the web cannot be compared with an application running on the OS without a browser).

The explanation of how Speedtest works over TCP says that "The client ... sends an initial chunk of data ... adjusts the chunk size and buffer size based on it to maximize usage of the network connection, and requests more data." I understand from this that it is not sending data continuously during the 10-second test, as iperf3 does, but rather sends the data in chunks. Therefore, it may never get to the point where the computer's memory is full. To check whether this is the case, can you monitor the computer's memory usage while Speedtest is running?
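
Purely to illustrate the chunked behavior described above (this is not Speedtest's implementation, just a sketch with arbitrary host, port, chunk-size and rate values), a sender that paces fixed-size chunks at the application level never gets far ahead of the network, whereas iperf3's TCP loop keeps writing as fast as the socket buffer accepts data:

/* Illustration only, not Speedtest's code: send fixed chunks at a capped
 * application-level rate instead of writing as fast as possible.  Any plain
 * TCP sink (e.g. "nc -l -p 5201 > /dev/null") can act as the server. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    const char *host = "192.168.1.237";     /* example sink address */
    const int port = 5201;
    const size_t chunk = 128 * 1024;        /* 128 KB per chunk */
    const double target_bps = 25e6;         /* pace to ~25 Mbit/s */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    char *buf = calloc(1, chunk);
    double sec_per_chunk = chunk * 8.0 / target_bps;   /* time budget per chunk */
    for (int i = 0; i < 200; i++) {
        if (send(fd, buf, chunk, 0) <= 0) {            /* ignores partial writes for brevity */
            perror("send");
            break;
        }
        usleep((useconds_t)(sec_per_chunk * 1e6));     /* crude application-level pacing */
    }

    free(buf);
    close(fd);
    return 0;
}

With pacing like this the application never gets far ahead of the wire, so the socket send buffer (and the machine's memory) should stay bounded.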

Transmit Buffers is set to 512

As I don't see any problem with the various parameter settings, one more test that may help is to check the effect of changing the Transmit Buffers size. If the problem is related to the network buffer size, that should affect the test. Can you change the Transmit Buffers size to a small number, e.g. 10, and test the effect? I don't know what is "too small", so if nothing works with 10, try another value, e.g. 50. (It could also help to try several different values, but since I think a reboot is required after each change, that may take too much time.)

olekstomek commented 4 years ago

@davidBar-On

The explanation of how Speedtest works over TCP says that "The client ... sends an initial chunk of data ... adjusts the chunk size and buffer size based on it to maximize usage of the network connection, and requests more data." I understand from this that it is not sending data continuously during the 10-second test, as iperf3 does, but rather sends the data in chunks.

I hadn't read that before. OK, so the algorithm is different and we cannot directly compare the performance and behavior of these applications on the same operating system.

can you monitor the computer's memory usage while Speedtest is running?

(screenshot) I re-ran the test twice and the memory stays at the same level (7.7 GB used), i.e. subsequent tests do not increase RAM consumption (even after closing the application the memory is not released; it stays at the level it grew to). In Task Manager I don't see any Speedtest processes using RAM. Next, an observation from the Speedtest.net CLI: the result is OK and RAM didn't increase, but after the test finished, for ~10 seconds or a little more, I can see that Ethernet is still sending data: (screenshot). Another strange thing: after these tests I ran the test in the GUI application again and everything was normal; the upload is not huge at the beginning and doesn't decrease, it just stays at my real upload level of 25 Mbps. That is how it should always work.

Additionally, on one of my computers with Windows Enterprise I did a network reset (screenshot: network_reset). The system rebooted, I re-ran the iperf3 test, and it's the same: a huge upload.

As I don't see any problem with the various parameter settings, one more test that may help is to check the effect of changing the Transmit Buffers size. If the problem is related to the network buffer size, that should affect the test. Can you change the Transmit Buffers size to a small number, e.g. 10, and test the effect? I don't know what is "too small", so if nothing works with 10, try another value, e.g. 50. (It could also help to try several different values, but since I think a reboot is required after each change, that may take too much time.)

I will do it, but I have to create a wired local network first (because the Transmit Buffers setting is not available on the WiFi adapter, only on Ethernet). I tried to run a test against an external server, but without success.

C:\Users\TZOK\Downloads\iperf-3.1.3-win64>iperf3 -c iperf.eenet.ee
iperf3: error - unable to connect to server: Connection timed out

C:\Users\TZOK\Downloads\iperf-3.1.3-win64>iperf3 -c iperf.eenet.ee -p 5204
iperf3: error - unable to connect to server: Connection timed out

C:\Users\TZOK\Downloads\iperf-3.1.3-win64>iperf3 -c speedtest.serverius.net -p 5002 -P 10 -4
iperf3: error - unable to receive control message: Connection reset by peer

C:\Users\TZOK\Downloads\iperf-3.1.3-win64>iperf3 -c speedtest.serverius.net
iperf3: error - unable to connect to server: Connection timed out
davidBar-On commented 4 years ago

@olekstomek, I still don't have a clue what the problem may be ...

I will do it, but I have to create a wired local network first (because the Transmit Buffers setting is not available on the WiFi adapter, only on Ethernet).

Please also try the test before changing the Transmit Buffers size, in case the issue is related to the WiFi adapter. (By the way, did you make sure the latest WiFi driver is installed?)

olekstomek commented 4 years ago

@davidBar-On

Please also try the test before changing the Transmit Buffers size, in case the issue is related to the WiFi adapter.

Unfortunately, nothing new using Ethernet. Before changing Transmit Buffers:

C:\Users\TZOK\Downloads\iperf-3.1.3-win64>iperf3 -c 192.168.1.237
Connecting to host 192.168.1.237, port 5201
[  4] local 192.168.1.172 port 51689 connected to 192.168.1.237 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   694 MBytes  5.81 Gbits/sec
[  4]   1.00-2.00   sec   605 MBytes  5.06 Gbits/sec
[  4]   2.00-3.00   sec   707 MBytes  5.95 Gbits/sec
[  4]   3.00-4.00   sec   704 MBytes  5.91 Gbits/sec
[  4]   4.00-5.00   sec   709 MBytes  5.94 Gbits/sec
[  4]   5.00-6.00   sec   707 MBytes  5.94 Gbits/sec
[  4]   6.00-7.00   sec   680 MBytes  5.70 Gbits/sec
[  4]   7.00-8.00   sec   666 MBytes  5.59 Gbits/sec
[  4]   8.00-9.00   sec   598 MBytes  5.01 Gbits/sec
[  4]   9.00-10.00  sec   632 MBytes  5.30 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  6.55 GBytes  5.62 Gbits/sec                  sender
[  4]   0.00-10.00  sec   113 MBytes  94.8 Mbits/sec                  receiver

iperf Done.

The valid range of transmit buffers is from 80 to 2048 in increments of 8.

Transmit buffers set to 80:

C:\Users\TZOK\Downloads\iperf-3.1.3-win64>iperf3 -c 192.168.1.237
Connecting to host 192.168.1.237, port 5201
[  4] local 192.168.1.172 port 50588 connected to 192.168.1.237 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   533 MBytes  4.46 Gbits/sec
[  4]   1.00-2.00   sec   510 MBytes  4.29 Gbits/sec
[  4]   2.00-3.00   sec   605 MBytes  5.07 Gbits/sec
[  4]   3.00-4.00   sec   692 MBytes  5.82 Gbits/sec
[  4]   4.00-5.00   sec   697 MBytes  5.84 Gbits/sec
[  4]   5.00-6.00   sec   724 MBytes  6.07 Gbits/sec
[  4]   6.00-7.00   sec   723 MBytes  6.06 Gbits/sec
[  4]   7.00-8.00   sec   706 MBytes  5.92 Gbits/sec
[  4]   8.00-9.00   sec   650 MBytes  5.45 Gbits/sec
[  4]   9.00-10.00  sec   657 MBytes  5.51 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  6.34 GBytes  5.45 Gbits/sec                  sender
[  4]   0.00-10.00  sec   113 MBytes  94.8 Mbits/sec                  receiver

iperf Done.

Transmit buffers set to 1024:

C:\Users\TZOK\Downloads\iperf-3.1.3-win64>iperf3 -c 192.168.1.237
Connecting to host 192.168.1.237, port 5201
[  4] local 192.168.1.172 port 50701 connected to 192.168.1.237 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   648 MBytes  5.44 Gbits/sec
[  4]   1.00-2.00   sec   671 MBytes  5.63 Gbits/sec
[  4]   2.00-3.00   sec   668 MBytes  5.61 Gbits/sec
[  4]   3.00-4.00   sec   658 MBytes  5.52 Gbits/sec
[  4]   4.00-5.00   sec   622 MBytes  5.22 Gbits/sec
[  4]   5.00-6.00   sec   648 MBytes  5.44 Gbits/sec
[  4]   6.00-7.00   sec   658 MBytes  5.52 Gbits/sec
[  4]   7.00-8.00   sec   606 MBytes  5.08 Gbits/sec
[  4]   8.00-9.00   sec   598 MBytes  5.02 Gbits/sec
[  4]   9.00-10.00  sec   614 MBytes  5.15 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  6.24 GBytes  5.36 Gbits/sec                  sender
[  4]   0.00-10.00  sec   113 MBytes  94.8 Mbits/sec                  receiver

iperf Done.

Transmit buffers set to 2048:

C:\Users\TZOK\Downloads\iperf-3.1.3-win64>iperf3 -c 192.168.1.237
Connecting to host 192.168.1.237, port 5201
[  4] local 192.168.1.172 port 50819 connected to 192.168.1.237 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   634 MBytes  5.32 Gbits/sec
[  4]   1.00-2.00   sec   664 MBytes  5.56 Gbits/sec
[  4]   2.00-3.00   sec   669 MBytes  5.62 Gbits/sec
[  4]   3.00-4.00   sec   676 MBytes  5.67 Gbits/sec
[  4]   4.00-5.00   sec   664 MBytes  5.56 Gbits/sec
[  4]   5.00-6.00   sec   630 MBytes  5.29 Gbits/sec
[  4]   6.00-7.00   sec   654 MBytes  5.47 Gbits/sec
[  4]   7.00-8.00   sec   629 MBytes  5.29 Gbits/sec
[  4]   8.00-9.00   sec   592 MBytes  4.97 Gbits/sec
[  4]   9.00-10.00  sec   579 MBytes  4.85 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  6.24 GBytes  5.36 Gbits/sec                  sender
[  4]   0.00-10.00  sec   113 MBytes  94.8 Mbits/sec                  receiver

iperf Done.

By the way, did you make sure the latest WiFi driver is installed?

On one of the computers with the problem I am using Dell Command | Update v4.0.0 and everything is up to date. On the other computer I see that "the best drivers for your device are already installed" for the Intel(R) Wi-Fi... adapter. Additionally, note that the problem also occurs over Ethernet.

It is very difficult to guess what could be causing these false results.

davidBar-On commented 4 years ago

@olekstomek,

Unfortunately, nothing new using Ethernet

Just to make sure, did you check the log on the server side to see the actual throughput? I would expect it to be higher over Ethernet.

It is very difficult to guess what could be causing these false results.

I agree ... Here are some more actions that can be taken:

  1. Try using UDP (-u) with a bandwidth of 100 Mbps (-b 100M) and log the computer's memory consumption. If the issue is not related to TCP, then because the actual throughput is only 25 Mbps, the allocated memory will grow significantly instead of packets simply being lost.

  2. Run the following commands under Windows shell. Maybe something will come up:

    • netsh int tcp show global
    • netsh int ipv4 show
    • netsh int ipv4 show offload
    • netsh int ipv4 show tcpstats
    • netsh int tcp show chimneystats

If none of this helps, then I think the next step (if you are willing to spend the effort) is to write small client/server programs to see whether the way iperf3 works with the socket is problematic on these computers (e.g. the use of Nwrite()).
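
To give an idea of what such a test client could look like (only a sketch, not iperf3 code; the address, port and block size are example values, and any plain TCP sink can act as the server), the following writes a 128 KB buffer in a loop for about 10 seconds and reports how fast send() accepts data, which is essentially what the iperf3 sender column is derived from:

/* Minimal test sender (sketch, not iperf3 code): connect to a TCP sink and
 * write a 128 KB buffer in a loop for ~10 s, reporting the rate at which the
 * socket accepts the data.  On a healthy stack this should settle near the
 * line rate once the send buffer and TCP window are full. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(int argc, char **argv)
{
    const char *host = argc > 1 ? argv[1] : "192.168.1.237";  /* example address */
    int port = argc > 2 ? atoi(argv[2]) : 5201;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    const size_t blk = 128 * 1024;          /* same order as iperf3's default block size */
    char *buf = calloc(1, blk);
    long long total = 0;
    time_t start = time(NULL);

    while (time(NULL) - start < 10) {
        ssize_t n = send(fd, buf, blk, 0);  /* could also be write(fd, buf, blk) */
        if (n <= 0) { perror("send"); break; }
        total += n;
    }

    double secs = difftime(time(NULL), start);
    if (secs < 1) secs = 1;
    printf("socket accepted %.1f MB in %.0f s = %.1f Mbit/s\n",
           total / 1e6, secs, total * 8.0 / secs / 1e6);

    free(buf);
    close(fd);
    return 0;
}

If a standalone loop like this also reports multi-gigabit rates on the problematic machines, the issue is below iperf3 (socket layer or driver); if it reports roughly the line rate, the problem is more likely in how iperf3 or its Cygwin build drives the socket.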

olekstomek commented 4 years ago

@davidBar-On

Just to make sure, did you check the log on the server side to see the actual throughput? I would expect it to be higher over Ethernet.

Yes, I observed the actual throughput on the server side and it was about 100 Mbps (the LAN ports on the router I used for the tests are 100 Mbps).

C:\Users\tzok\Downloads\iperf-3.1.3-win64>iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.1.172, port 50478
[  5] local 192.168.1.237 port 5201 connected to 192.168.1.172 port 50479
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec  10.6 MBytes  88.7 Mbits/sec
[  5]   1.00-2.00   sec  11.3 MBytes  94.8 Mbits/sec
[  5]   2.00-3.00   sec  11.3 MBytes  94.4 Mbits/sec
[  5]   3.00-4.00   sec  11.3 MBytes  94.7 Mbits/sec
[  5]   4.00-5.00   sec  11.3 MBytes  94.7 Mbits/sec
[  5]   5.00-6.00   sec  11.3 MBytes  94.7 Mbits/sec
[  5]   6.00-7.00   sec  11.3 MBytes  94.7 Mbits/sec
[  5]   7.00-8.00   sec  11.3 MBytes  94.7 Mbits/sec
[  5]   8.00-9.00   sec  11.3 MBytes  94.7 Mbits/sec
[  5]   9.00-10.00  sec  11.3 MBytes  94.7 Mbits/sec
[  5]  10.00-11.00  sec  11.3 MBytes  94.7 Mbits/sec
[  5]  11.00-11.30  sec  3.43 MBytes  94.7 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-11.30  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-11.30  sec   127 MBytes  94.2 Mbits/sec                  receiver

Try using UDP (-u) with a bandwidth of 100 Mbps (-b 100M) and log the computer's memory consumption. If the issue is not related to TCP, then because the actual throughput is only 25 Mbps, the allocated memory will grow significantly instead of packets simply being lost.

I did all of the tests here over Ethernet.

Using UDP:

C:\Users\TZOK\Downloads\iperf-3.1.3-win64>iperf3 -c 192.168.1.237 -u -b 100M
Connecting to host 192.168.1.237, port 5201
[  4] local 192.168.1.172 port 64923 connected to 192.168.1.237 port 5201
[ ID] Interval           Transfer     Bandwidth       Total Datagrams
[  4]   0.00-1.00   sec  10.5 MBytes  87.8 Mbits/sec  1341
[  4]   1.00-2.00   sec  11.4 MBytes  95.6 Mbits/sec  1459
[  4]   2.00-3.00   sec  11.4 MBytes  95.6 Mbits/sec  1458
[  4]   3.00-4.00   sec  11.4 MBytes  95.6 Mbits/sec  1460
[  4]   4.00-5.00   sec  11.4 MBytes  95.6 Mbits/sec  1459
[  4]   5.00-6.00   sec  11.4 MBytes  95.6 Mbits/sec  1459
[  4]   6.00-7.00   sec  11.4 MBytes  95.6 Mbits/sec  1459
[  4]   7.00-8.00   sec  11.4 MBytes  95.6 Mbits/sec  1458
[  4]   8.00-9.00   sec  11.4 MBytes  95.6 Mbits/sec  1459
[  4]   9.00-10.00  sec  11.4 MBytes  95.6 Mbits/sec  1459
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec   113 MBytes  94.8 Mbits/sec  0.284 ms  0/14470 (0%)
[  4] Sent 14470 datagrams

iperf Done.

I also used:

iperf3 -c 192.168.1.237 -u -b 100M -t 30
.
.
.
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-30.00  sec   341 MBytes  95.4 Mbits/sec  0.290 ms  0/43650 (0%)
[  4] Sent 43650 datagrams

and the memory stays the same (that's why I changed the title of the issue and made it clear that it is about TCP). (screenshot)

  • netsh int tcp show global
C:\Users\TZOK>netsh int tcp show global
Querying active state...

TCP Global Parameters
----------------------------------------------
Receive-Side Scaling State          : enabled
Receive Window Auto-Tuning Level    : normal
Add-On Congestion Control Provider  : default
ECN Capability                      : disabled
RFC 1323 Timestamps                 : disabled
Initial RTO                         : 1000
Receive Segment Coalescing State    : enabled
Non Sack Rtt Resiliency             : disabled
Max SYN Retransmissions             : 4
Fast Open                           : enabled
Fast Open Fallback                  : enabled
HyStart                             : enabled
Pacing Profile                      : off
  • netsh int ipv4 show
C:\Users\TZOK>netsh int ipv4 show

The following commands are available:

Commands in this context:
show addresses - Shows IP address configurations.
show compartments - Shows compartment parameters.
show config    - Displays IP address and additional information.
show destinationcache - Shows destination cache entries.
show dnsservers - Displays the DNS server addresses.
show dynamicportrange - Shows dynamic port range configuration parameters.
show excludedportrange - Shows all excluded port ranges.
show global    - Shows global configuration parameters.
show icmpstats - Displays ICMP statistics.
show interfaces - Shows interface parameters.
show ipaddresses - Shows current IP addresses.
show ipnettomedia - Displays IP net-to-media mappings.
show ipstats   - Displays IP statistics.
show joins     - Displays multicast groups joined.
show neighbors - Shows neighbor cache entries.
show offload   - Displays the offload information.
show route     - Shows route table entries.
show subinterfaces - Shows subinterface parameters.
show tcpconnections - Displays TCP connections.
show tcpstats  - Displays TCP statistics.
show udpconnections - Displays UDP connections.
show udpstats  - Displays UDP statistics.
show winsservers - Displays the WINS server addresses.
  • netsh int ipv4 show offload
C:\Users\TZOK>netsh int ipv4 show offload

Interface 1: Loopback Pseudo-Interface 1

Interface 10: Local Area Connection* 1

Interface 17: Bluetooth Network Connection

Interface 24: Ethernet

Interface 36: vEthernet (Default Switch)
  • netsh int ipv4 show tcpstats
C:\Users\TZOK>netsh int ipv4 show tcpstats

TCP Statistics
------------------------------------------------------
Timeout Algorithm:                      Van Jacobson's Algorithm
Minimum Timeout:                        5
Maximum Timeout:                        4294967295
Maximum Connections:                    Dynamic
Active Opens:                           28
Passive Opens:                          13
Attempts Failed:                        9
Established Resets:                     4
Currently Established:                  0
In Segments:                            45872
Out Segments:                           91620
Retransmitted Segments:                 37
In Errors:                              0
Out Resets:                             9
Fastopen Active Opens:                  0
Fastopen Passive Opens:                 0
Fastopen Attempts Failed:               0
Retransmits Of First SYN:               9
Retransmits Of First SYN (Fastopen):    0
  • netsh int tcp show chimneystats
C:\Users\TZOK>netsh int tcp show chimneystats
The following command was not found: int tcp show chimneystats.

C:\Users\tzok>netsh int tcp show

The following commands are available:

Commands in this context:
show global    - Shows global TCP parameters.
show heuristics - Shows heuristics TCP parameters.
show rscstats  - Shows TCP statistics for Receive Segment Coalescing-capable interfaces.
show security  - Shows TCP security parameters.
show supplemental - Shows supplemental template based TCP parameters.
show supplementalports - Shows port tuples in the TCP supplemental filter table.
show supplementalsubnets - Shows destination subnets in the TCP supplemental filter table.

If none of this helps, then I think the next step (if you are willing to spend the effort) is to write small client/server programs to see whether the way iperf3 works with the socket is problematic on these computers (e.g. the use of Nwrite()).

We can check, but I need more details.

davidBar-On commented 4 years ago

@olekstomek,

... and the memory stays the same (that's why I changed the title of the issue and made it clear that it is about TCP).

As you are using the 100 Mbps interface, -b 100M is not big enough. Can you retry the UDP test with -b 500M (or even -b 1G) and see what the throughput is and what the effect on memory allocation is?

olekstomek commented 4 years ago

@davidBar-On

Can you retry the UDP test with -b 500M (or even -b 1G) and see what the throughput is and what the effect on memory allocation is?

Take a look (screenshot: udp_500M_1GB): the first run was -b 500M, the second -b 1G, and then I did -b 2G. RAM is stable. On the server side the bandwidth is almost 100 Mbps (~95 Mbps, as before).

davidBar-On commented 4 years ago

@olekstomek, I don't have suggestions for further testing, so the next step may be to see whether changing the way iperf3 sends data solves the issue. If you know how to build iperf3 under Windows, that is great. Otherwise we will have to find small server/client code that can be used for that.

What I think should be checked first is whether there is a problem in the way iperf3 sends data. For TCP this is done in iperf_tcp_send():

    r = Nwrite(sp->socket, sp->buffer, sp->settings->blksize, Ptcp);

At least under Linux, Nwrite() uses write(), which, as I understand it, maps to _write() from the Windows POSIX library. What I suggest checking first is whether using _write() directly, instead of Nwrite(), helps. If not, then maybe Nwrite() should be changed to use send() or any other method available on Windows (it does not have to be compatible with Linux or any other OS).
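
Roughly, the candidate one-line replacements would look like this (a sketch only, untested; note that plain write() takes three arguments, and send() takes a flags value, normally 0, rather than iperf3's Ptcp tag):

    /* Hypothetical alternatives for the Nwrite() call in iperf_tcp_send(): */
    r = write(sp->socket, sp->buffer, sp->settings->blksize);
    /* or */
    r = send(sp->socket, sp->buffer, sp->settings->blksize, 0);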

olekstomek commented 4 years ago

@davidBar-On

If you know how to build iperf3 under Windows, that is great.

I didn't know how, but I did it (btw, I think issues https://github.com/esnet/iperf/issues/1067 and https://github.com/esnet/iperf/issues/1073 can be closed; the documentation was sufficient and simple for me - configuring the environment was the most difficult part, but that is a separate problem, and the instructions explain how to compile the code once the environment is ready). I downloaded iperf-3.9.tar.gz (17-Aug-2020 18:29, 622459) from here. After compilation (first I compiled and ran it without any changes, and the problem still exists): image I ran iperf3.exe. Version of the compiled build (client):

C:\cygwin64\iperf-3.9\iperf-3.9\src>iperf3 -v
iperf 3.9 (cJSON 1.7.13)
CYGWIN_NT-10.0-18363 [mycomputername] 3.1.7-340.x86_64 2020-08-22 17:48 UTC x86_64
Optional features available: CPU affinity setting

Version of iperf3 on server:

C:\Users\tzok\Downloads\iperf3.9_64>iperf3 -v
iperf 3.9 (cJSON 1.7.13)
CYGWIN_NT-10.0-18363 [mycomputername] 3.1.6-340.x86_64 2020-07-09 08:20 UTC x86_64
Optional features available: CPU affinity setting

I changed this line in https://github.com/esnet/iperf/blob/bd1437791a63579d589e9bea7de9250a876a5c97/src/iperf_tcp.c#L91 to:

r = send(sp->socket, sp->buffer, sp->settings->blksize, Ptcp);

It compiled. I ran it and got:

C:\cygwin64\iperf-3.9\iperf-3.9\src>iperf3 -c 192.168.1.237
Connecting to host 192.168.1.237, port 5201
[  5] local 192.168.1.172 port 55766 connected to 192.168.1.237 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   520 MBytes  4.36 Gbits/sec
[  5]   1.00-2.00   sec   585 MBytes  4.91 Gbits/sec
[  5]   2.00-3.00   sec   617 MBytes  5.18 Gbits/sec
[  5]   3.00-4.00   sec   642 MBytes  5.38 Gbits/sec
[  5]   4.00-5.00   sec   532 MBytes  4.46 Gbits/sec
[  5]   5.00-6.00   sec   507 MBytes  4.23 Gbits/sec
[  5]   6.00-7.00   sec   501 MBytes  4.22 Gbits/sec
[  5]   7.00-8.00   sec   540 MBytes  4.53 Gbits/sec
[  5]   8.00-9.00   sec   429 MBytes  3.60 Gbits/sec
[  5]   9.00-10.00  sec   473 MBytes  3.96 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  5.22 GBytes  4.48 Gbits/sec                  sender
[  5]   0.00-10.25  sec   115 MBytes  94.0 Mbits/sec                  receiver

iperf Done.

I changed this line in https://github.com/esnet/iperf/blob/bd1437791a63579d589e9bea7de9250a876a5c97/src/iperf_tcp.c#L91 to:

r = _write(sp->socket, sp->buffer, sp->settings->blksize, Ptcp);

and I got:

Making all in src
make[1]: Entering directory '/iperf-3.9/iperf-3.9/src'
make  all-am
make[2]: Entering directory '/iperf-3.9/iperf-3.9/src'
  CC       iperf_api.lo
  CC       iperf_locale.lo
  CC       iperf_tcp.lo
iperf_tcp.c: In function ‘iperf_tcp_send’:
iperf_tcp.c:91:6: warning: implicit declaration of function ‘_write’; did you mean ‘Nwrite’? [-Wimplicit-function-declaration]
   91 |  r = _write(sp->socket, sp->buffer, sp->settings->blksize, Ptcp);
      |      ^~~~~~
      |      Nwrite
  CCLD     libiperf.la
libtool: warning: undefined symbols not allowed in x86_64-unknown-cygwin shared libraries; building static only
  CCLD     iperf3.exe
/usr/lib/gcc/x86_64-pc-cygwin/10/../../../../x86_64-pc-cygwin/bin/ld: ./.libs/libiperf.a(iperf_tcp.o): in function `iperf_tcp_send':
/iperf-3.9/iperf-3.9/src/iperf_tcp.c:91: undefined reference to `_write'
/iperf-3.9/iperf-3.9/src/iperf_tcp.c:91:(.text+0x10f): relocation truncated to fit: R_X86_64_PC32 against undefined symbol `_write'
collect2: error: ld returned 1 exit status
make[2]: *** [Makefile:859: iperf3.exe] Error 1
make[2]: Leaving directory '/iperf-3.9/iperf-3.9/src'
make[1]: *** [Makefile:710: all] Error 2
make[1]: Leaving directory '/iperf-3.9/iperf-3.9/src'
make: *** [Makefile:387: all-recursive] Error 1

I changed this line in https://github.com/esnet/iperf/blob/bd1437791a63579d589e9bea7de9250a876a5c97/src/iperf_tcp.c#L91 to:

r = write(sp->socket, sp->buffer, sp->settings->blksize, Ptcp);

and I got:

Making all in src
make[1]: Entering directory '/iperf-3.9/iperf-3.9/src'
make  all-am
make[2]: Entering directory '/iperf-3.9/iperf-3.9/src'
  CC       iperf_api.lo
  CC       iperf_locale.lo
  CC       iperf_tcp.lo
iperf_tcp.c: In function ‘iperf_tcp_send’:
iperf_tcp.c:91:6: error: too many arguments to function ‘write’
   91 |  r = write(sp->socket, sp->buffer, sp->settings->blksize, Ptcp);
      |      ^~~~~
In file included from /usr/include/unistd.h:4,
                 from iperf_tcp.c:31:
/usr/include/sys/unistd.h:245:25: note: declared here
  245 | _READ_WRITE_RETURN_TYPE write (int __fd, const void *__buf, size_t __nbyte);
      |                         ^~~~~
make[2]: *** [Makefile:954: iperf_tcp.lo] Error 1
make[2]: Leaving directory '/iperf-3.9/iperf-3.9/src'
make[1]: *** [Makefile:710: all] Error 2
make[1]: Leaving directory '/iperf-3.9/iperf-3.9/src'
make: *** [Makefile:387: all-recursive] Error 1

After every change I ran ./configure and then make. Still nothing new, it seems... This is crazy. :)

davidBar-On commented 4 years ago

@olekstomek,

If you know how to build iperf3 under Windows, that is great.

I didn't know how, but I did it

This should be very helpful! As using send() didn't help, it seems that the issue is related to the network adapter. The following can be done to get more information:

  1. Run the client with the -d debug option, sending only one packet using -k 1. This is mainly to get the SNDBUF/RCVBUF network buffer sizes. Please send the header of the debug information. It should be something like:

    $ ./src/iperf3 -c 127.0.0.1 -d -k 1
    send_parameters:
    {
    "tcp":  true,
    "omit": 0,
    "time": 0,
    "blockcount":   1,
    "parallel": 1,
    "len":  131072,
    "pacing_timer": 1000,
    "client_version":   "3.9+"
    }
    Connecting to host 127.0.0.1, port 5201
    SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=
    SNDBUF is 524288, expecting 0
    RCVBUF is 1048576, expecting 0
  2. After the select() statement in iperf_run_client(), add the following debug code and run the client with -d (but without -k). Please send a part of the debug output, after running for several seconds, that includes some "SELECT RESULT=..." lines. This will show whether Nwrite/send really sends all the data successfully, without any limitation from the network buffers.

The debug code to add after the select():

    result = select(test->max_fd + 1, &read_set, &write_set, NULL, timeout);

        if (test->debug) {
                iperf_printf(test, "SELECT RESULT=%d, CNTL-FD=%d, READ-FDs-SET=", result, test->ctrl_sck);
                if (result > 0) {
                        int i;
                        for (i = 0; i < test->max_fd + 1; i++) {
                                if (FD_ISSET(i, &read_set))
                                        iperf_printf(test, "%d;", i);
                        }
                }
                iperf_printf(test, ", WRITE-FDs-SET=");
                if (result > 0) {
                        int i;
                        for (i = 0; i < test->max_fd + 1; i++) {
                                if (FD_ISSET(i, &write_set))
                                        iperf_printf(test, "%d;", i);
                        }
                }
                iperf_printf(test, "\n");
        }

The output should be something like:

SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 410910720
sent 0 bytes of 131072, total 410910720
sent 0 bytes of 131072, total 410910720
sent 0 bytes of 131072, total 410910720
sent 0 bytes of 131072, total 410910720
sent 0 bytes of 131072, total 410910720
sent 0 bytes of 131072, total 410910720
sent 0 bytes of 131072, total 410910720
sent 0 bytes of 131072, total 410910720
sent 131072 bytes of 131072, total 411041792
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 411172864
sent 0 bytes of 131072, total 411172864
sent 131072 bytes of 131072, total 411303936
sent 0 bytes of 131072, total 411303936
sent 0 bytes of 131072, total 411303936
sent 131072 bytes of 131072, total 411435008
sent 0 bytes of 131072, total 411435008
sent 0 bytes of 131072, total 411435008
sent 131072 bytes of 131072, total 411566080
sent 0 bytes of 131072, total 411566080
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 411697152
sent 131072 bytes of 131072, total 411828224
sent 0 bytes of 131072, total 411828224
sent 0 bytes of 131072, total 411828224
sent 131072 bytes of 131072, total 411959296
sent 0 bytes of 131072, total 411959296
sent 131072 bytes of 131072, total 412090368
sent 0 bytes of 131072, total 412090368
sent 0 bytes of 131072, total 412090368
sent 131072 bytes of 131072, total 412221440
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 412352512
olekstomek commented 4 years ago

@davidBar-On I ran the test on my localhost (client and server on one machine).

C:\cygwin64\iperf-3.9\iperf-3.9\src>iperf3 -c 127.0.0.1 -d -k 1
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=
send_parameters:
{
        "tcp":  true,
        "omit": 0,
        "time": 0,
        "blockcount":   1,
        "parallel":     1,
        "len":  131072,
        "pacing_timer": 1000,
        "client_version":       "3.9"
}
Connecting to host 127.0.0.1, port 5201
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=
SNDBUF is 65536, expecting 0
RCVBUF is 65536, expecting 0
[  5] local 127.0.0.1 port 50886 connected to 127.0.0.1 port 5201
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
SELECT RESULT=2, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=5;
SELECT RESULT=2, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 131072
sent 131072 bytes of 131072, total 262144
sent 131072 bytes of 131072, total 393216
sent 131072 bytes of 131072, total 524288
sent 131072 bytes of 131072, total 655360
sent 131072 bytes of 131072, total 786432
sent 131072 bytes of 131072, total 917504
sent 131072 bytes of 131072, total 1048576
sent 131072 bytes of 131072, total 1179648
sent 131072 bytes of 131072, total 1310720
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
SELECT RESULT=2, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=5;
send_results
{
        "cpu_util_total":       58.665455508875517,
        "cpu_util_user":        0,
        "cpu_util_system":      58.665455508875517,
        "sender_has_retransmits":       0,
        "streams":      [{
                        "id":   1,
                        "bytes":        1310720,
                        "retransmits":  -1,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0,
                        "start_time":   0,
                        "end_time":     0.010994
                }]
}
get_results
{
        "cpu_util_total":       0.04025146323460866,
        "cpu_util_user":        0,
        "cpu_util_system":      0.04025146323460866,
        "sender_has_retransmits":       -1,
        "streams":      [{
                        "id":   1,
                        "bytes":        1310710,
                        "retransmits":  -1,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0,
                        "start_time":   0,
                        "end_time":     0.014344
                }]
}
SELECT RESULT=2, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=5;
interval_len 0.010994 bytes_transferred 1310720
interval forces keep
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-0.01   sec  1.25 MBytes   954 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-0.01   sec  1.25 MBytes   954 Mbits/sec                  sender
[  5]   0.00-0.01   sec  1.25 MBytes   731 Mbits/sec                  receiver

iperf Done.

and more output with SELECT_RESULT:

Connecting to host 127.0.0.1, port 5201
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=
SNDBUF is 65536, expecting 0
RCVBUF is 65536, expecting 0
[  5] local 127.0.0.1 port 50891 connected to 127.0.0.1 port 5201
SELECT RESULT=2, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=5;
SELECT RESULT=2, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 131072
sent 131072 bytes of 131072, total 262144
sent 131072 bytes of 131072, total 393216
sent 131072 bytes of 131072, total 524288
sent 131072 bytes of 131072, total 655360
sent 131072 bytes of 131072, total 786432
sent 131072 bytes of 131072, total 917504
sent 131072 bytes of 131072, total 1048576
sent 131072 bytes of 131072, total 1179648
sent 131072 bytes of 131072, total 1310720
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 1441792
sent 131072 bytes of 131072, total 1572864
sent 131072 bytes of 131072, total 1703936
sent 131072 bytes of 131072, total 1835008
sent 131072 bytes of 131072, total 1966080
sent 131072 bytes of 131072, total 2097152
sent 131072 bytes of 131072, total 2228224
sent 131072 bytes of 131072, total 2359296
sent 131072 bytes of 131072, total 2490368
sent 131072 bytes of 131072, total 2621440
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 2752512
sent 131072 bytes of 131072, total 2883584
sent 131072 bytes of 131072, total 3014656
sent 131072 bytes of 131072, total 3145728
sent 131072 bytes of 131072, total 3276800
sent 131072 bytes of 131072, total 3407872
sent 131072 bytes of 131072, total 3538944
sent 131072 bytes of 131072, total 3670016
sent 131072 bytes of 131072, total 3801088
sent 131072 bytes of 131072, total 3932160
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 4063232
sent 131072 bytes of 131072, total 4194304
sent 131072 bytes of 131072, total 4325376
sent 131072 bytes of 131072, total 4456448
sent 131072 bytes of 131072, total 4587520
sent 131072 bytes of 131072, total 4718592
sent 131072 bytes of 131072, total 4849664
sent 131072 bytes of 131072, total 4980736
sent 131072 bytes of 131072, total 5111808
sent 131072 bytes of 131072, total 5242880
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 5373952
sent 131072 bytes of 131072, total 5505024
sent 131072 bytes of 131072, total 5636096
sent 131072 bytes of 131072, total 5767168
sent 131072 bytes of 131072, total 5898240
sent 131072 bytes of 131072, total 6029312
sent 131072 bytes of 131072, total 6160384
sent 131072 bytes of 131072, total 6291456
sent 131072 bytes of 131072, total 6422528
sent 131072 bytes of 131072, total 6553600
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 6684672
sent 131072 bytes of 131072, total 6815744
sent 131072 bytes of 131072, total 6946816
C:\cygwin64\iperf-3.9\iperf-3.9\src>iperf3  -c 127.0.0.1 -d -k 1
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=
send_parameters:
{
        "tcp":  true,
        "omit": 0,
        "time": 0,
        "blockcount":   1,
        "parallel":     1,
        "len":  131072,
        "pacing_timer": 1000,
        "client_version":       "3.9"
}
Connecting to host 127.0.0.1, port 5201
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=
SNDBUF is 65536, expecting 0
RCVBUF is 65536, expecting 0
[  5] local 127.0.0.1 port 50950 connected to 127.0.0.1 port 5201
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
SELECT RESULT=2, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=5;
SELECT RESULT=2, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 131072
sent 131072 bytes of 131072, total 262144
sent 131072 bytes of 131072, total 393216
sent 131072 bytes of 131072, total 524288
sent 131072 bytes of 131072, total 655360
sent 131072 bytes of 131072, total 786432
sent 131072 bytes of 131072, total 917504
sent 131072 bytes of 131072, total 1048576
sent 131072 bytes of 131072, total 1179648
sent 131072 bytes of 131072, total 1310720
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
SELECT RESULT=2, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=5;
send_results
{
        "cpu_util_total":       22.499550008999819,
        "cpu_util_user":        0,
        "cpu_util_system":      22.499550008999819,
        "sender_has_retransmits":       0,
        "streams":      [{
                        "id":   1,
                        "bytes":        1310720,
                        "retransmits":  -1,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0,
                        "start_time":   0,
                        "end_time":     0.02644
                }]
}
get_results
{
        "cpu_util_total":       0.11810873806460452,
        "cpu_util_user":        0,
        "cpu_util_system":      0.11810873806460452,
        "sender_has_retransmits":       -1,
        "streams":      [{
                        "id":   1,
                        "bytes":        1310720,
                        "retransmits":  -1,
                        "jitter":       0,
                        "errors":       0,
                        "packets":      0,
                        "start_time":   0,
                        "end_time":     0.027793
                }]
}
SELECT RESULT=2, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=5;
interval_len 0.026440 bytes_transferred 1310720
interval forces keep
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-0.03   sec  1.25 MBytes   397 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-0.03   sec  1.25 MBytes   397 Mbits/sec                  sender
[  5]   0.00-0.03   sec  1.25 MBytes   377 Mbits/sec                  receiver

iperf Done.

and more output with SELECT_RESULT:

Connecting to host 127.0.0.1, port 5201
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=
SNDBUF is 65536, expecting 0
RCVBUF is 65536, expecting 0
[  5] local 127.0.0.1 port 50959 connected to 127.0.0.1 port 5201
SELECT RESULT=2, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=5;
SELECT RESULT=2, CNTL-FD=4, READ-FDs-SET=4;, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 131072
sent 131072 bytes of 131072, total 262144
sent 131072 bytes of 131072, total 393216
sent 131072 bytes of 131072, total 524288
sent 131072 bytes of 131072, total 655360
sent 131072 bytes of 131072, total 786432
sent 131072 bytes of 131072, total 917504
sent 131072 bytes of 131072, total 1048576
sent 131072 bytes of 131072, total 1179648
sent 131072 bytes of 131072, total 1310720
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 1441792
sent 131072 bytes of 131072, total 1572864
sent 131072 bytes of 131072, total 1703936
sent 131072 bytes of 131072, total 1835008
sent 131072 bytes of 131072, total 1966080
sent 131072 bytes of 131072, total 2097152
sent 131072 bytes of 131072, total 2228224
sent 131072 bytes of 131072, total 2359296
sent 131072 bytes of 131072, total 2490368
sent 131072 bytes of 131072, total 2621440
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 2752512
sent 131072 bytes of 131072, total 2883584
sent 131072 bytes of 131072, total 3014656
sent 131072 bytes of 131072, total 3145728
sent 131072 bytes of 131072, total 3276800
sent 131072 bytes of 131072, total 3407872
sent 131072 bytes of 131072, total 3538944
sent 131072 bytes of 131072, total 3670016
sent 131072 bytes of 131072, total 3801088
sent 131072 bytes of 131072, total 3932160
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 4063232
sent 131072 bytes of 131072, total 4194304
sent 131072 bytes of 131072, total 4325376
sent 131072 bytes of 131072, total 4456448
sent 131072 bytes of 131072, total 4587520
sent 131072 bytes of 131072, total 4718592
sent 131072 bytes of 131072, total 4849664
sent 131072 bytes of 131072, total 4980736
sent 131072 bytes of 131072, total 5111808
sent 131072 bytes of 131072, total 5242880
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 5373952
sent 131072 bytes of 131072, total 5505024
sent 131072 bytes of 131072, total 5636096
sent 131072 bytes of 131072, total 5767168
sent 131072 bytes of 131072, total 5898240
sent 131072 bytes of 131072, total 6029312
sent 131072 bytes of 131072, total 6160384
sent 131072 bytes of 131072, total 6291456
sent 131072 bytes of 131072, total 6422528
sent 131072 bytes of 131072, total 6553600
SELECT RESULT=1, CNTL-FD=4, READ-FDs-SET=, WRITE-FDs-SET=5;
sent 131072 bytes of 131072, total 6684672
sent 131072 bytes of 131072, total 6815744
sent 131072 bytes of 131072, total 6946816

As I mentioned earlier, I did tests to localhost and noticed that the RAM is not growing. image Or should I run a test to another machine (not localhost) on the local network, or is such a test sufficient?

davidBar-On commented 4 years ago

@olekstomek,

Or maybe I should do a test to another machine (not localhost) on the local network or is such a test sufficient?

The test should be done to another machine. The reason is that the interface between two processes on the same machine is very fast - basically just writing to and reading from the machine's internal memory, which based on the previous runs is several Gbps. This is probably why memory usage is not growing: every packet sent by the client is immediately consumed by the server, so there is no need for network buffering.

When sending to another machine, the bandwidth is much lower (25Mbps over WiFi and 100Mbps over Ethernet), and therefore network buffering is needed (data is written into the buffer at Gbps rates but drained onto the wire at only 100Mbps).

Can you run the tests again, sending to another machine? Note that I asked for the run with -k 1 just to get the header of the debug information. Also, for the other run, wait several seconds before copying the debug output, as the speed is always very high initially, until the network buffers are full. We need to see the behavior after the network buffers were supposed to be full.
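
One more data point that might be easy to collect on both machines is what the OS reports as the effective socket buffer sizes, which is essentially the number iperf3 -d prints as "SNDBUF is ..." / "RCVBUF is ...". A sketch only, assuming 'sock' is an already-connected TCP socket:

    /* Read back the effective socket buffer sizes via getsockopt(). */
    #include <stdio.h>
    #include <sys/socket.h>

    static void print_socket_buffers(int sock)
    {
        int sndbuf = 0, rcvbuf = 0;
        socklen_t len = sizeof(sndbuf);

        getsockopt(sock, SOL_SOCKET, SO_SNDBUF, (void *)&sndbuf, &len);
        len = sizeof(rcvbuf);
        getsockopt(sock, SOL_SOCKET, SO_RCVBUF, (void *)&rcvbuf, &len);
        printf("SO_SNDBUF=%d SO_RCVBUF=%d\n", sndbuf, rcvbuf);
    }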

olekstomek commented 4 years ago

@davidBar-On I did tests between two different computers in my local network. I'm attaching full logs so you can see everything; I think complete data is best, because partial logs that I chose myself could bias the conclusion. First I ran iperf with Nwrite and then with send. Nwrite.txt send.txt server_side.txt

davidBar-On commented 4 years ago

@olekstomek, unfortunately I don't see anything in the latest output that can help with understanding the problem. It just confirms that there is a problem related to the TCP buffer/cache. In a normal situation, once the transmit network buffers are full, send or write should report that 0 bytes were sent, i.e. we should have seen many "sent 0 bytes of 131072" messages. However, every send successfully writes the whole packet. (The slower sending rate in these tests is because of printing the debug messages.)
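
For context, Nwrite() behaves roughly like the sketch below (simplified from memory, not the exact net.c source), which is why a full transmit buffer should have shown up as "sent 0 bytes" lines in the debug output:

    /* Rough sketch of Nwrite()'s behaviour: loop over write() and, when the
     * socket would block, return how many bytes were accepted so far
     * (0 if the transmit buffer is already full). */
    #include <errno.h>
    #include <unistd.h>

    ssize_t nwrite_sketch(int fd, const char *buf, size_t count)
    {
        size_t nleft = count;

        while (nleft > 0) {
            ssize_t r = write(fd, buf, nleft);
            if (r < 0) {
                if (errno == EINTR || errno == EAGAIN || errno == EWOULDBLOCK)
                    return count - nleft;   /* partial (possibly 0) bytes */
                return -1;                  /* hard error */
            }
            nleft -= (size_t)r;
            buf   += r;
        }
        return count;   /* whole block was written */
    }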

Hardware: Dell, CPU Intel i5-10310U @ 1.70GHz 16GB RAM, SSD disk and Dell CPU Intel i5-5300U @ 2.30GHz 16GB RAM, SSD disk

If I understand correctly, the problem happens on only one of these computers. Which of them shows the problem when the client runs on it?

Also, please send the output of the following shell command on both computers: Get-NetTCPSetting

olekstomek commented 4 years ago

@davidBar-On

Hardware: Dell, CPU Intel i5-10310U @ 1.70GHz 16GB RAM, SSD disk and Dell CPU Intel i5-5300U @ 2.30GHz 16GB RAM, SSD disk

If I understand correctly, the problem happens on only one of these computers. Which of them shows the problem when the client runs on it?

The problem occurs on both computers: each of them acted as the client in some tests and as the server in others. Note that the upload result was ~5Gbps in one case and ~18Gbps in another - this was always the result shown on the computer I used as the client. The ~18Gbps result was on the Dell with CPU Intel i5-10310U @ 1.70GHz, 16GB RAM DDR4, SSD disk; the ~5Gbps result was on the Dell with CPU Intel i5-5300U @ 2.30GHz, 16GB RAM DDR3, SSD disk.

Also, please send the output of the following shell command on both computers: Get-NetTCPSetting

For the Dell with CPU Intel i5-10310U the result is here: https://github.com/esnet/iperf/issues/1069#issuecomment-727557023. For the Dell with CPU Intel i5-5300U:

PS C:\WINDOWS\system32> GET-NetTCPSetting

SettingName                     : Automatic
MinRto(ms)                      :
InitialCongestionWindow(MSS)    :
CongestionProvider              :
CwndRestart                     :
DelayedAckTimeout(ms)           :
DelayedAckFrequency             :
MemoryPressureProtection        :
AutoTuningLevelLocal            :
AutoTuningLevelGroupPolicy      :
AutoTuningLevelEffective        :
EcnCapability                   :
Timestamps                      :
InitialRto(ms)                  :
ScalingHeuristics               :
DynamicPortRangeStartPort       :
DynamicPortRangeNumberOfPorts   :
AutomaticUseCustom              :
NonSackRttResiliency            :
ForceWS                         :
MaxSynRetransmissions           :
AutoReusePortRangeStartPort     :
AutoReusePortRangeNumberOfPorts :

SettingName                     : InternetCustom
MinRto(ms)                      : 300
InitialCongestionWindow(MSS)    : 10
CongestionProvider              : CUBIC
CwndRestart                     : False
DelayedAckTimeout(ms)           : 40
DelayedAckFrequency             : 2
MemoryPressureProtection        : Disabled
AutoTuningLevelLocal            : Normal
AutoTuningLevelGroupPolicy      : NotConfigured
AutoTuningLevelEffective        : Local
EcnCapability                   : Disabled
Timestamps                      : Disabled
InitialRto(ms)                  : 1000
ScalingHeuristics               : Disabled
DynamicPortRangeStartPort       : 49152
DynamicPortRangeNumberOfPorts   : 16384
AutomaticUseCustom              : Disabled
NonSackRttResiliency            : Disabled
ForceWS                         : Enabled
MaxSynRetransmissions           : 4
AutoReusePortRangeStartPort     : 0
AutoReusePortRangeNumberOfPorts : 0

SettingName                     : DatacenterCustom
MinRto(ms)                      : 20
InitialCongestionWindow(MSS)    : 10
CongestionProvider              : CUBIC
CwndRestart                     : False
DelayedAckTimeout(ms)           : 10
DelayedAckFrequency             : 2
MemoryPressureProtection        : Disabled
AutoTuningLevelLocal            : Normal
AutoTuningLevelGroupPolicy      : NotConfigured
AutoTuningLevelEffective        : Local
EcnCapability                   : Disabled
Timestamps                      : Disabled
InitialRto(ms)                  : 1000
ScalingHeuristics               : Disabled
DynamicPortRangeStartPort       : 49152
DynamicPortRangeNumberOfPorts   : 16384
AutomaticUseCustom              : Disabled
NonSackRttResiliency            : Disabled
ForceWS                         : Enabled
MaxSynRetransmissions           : 4
AutoReusePortRangeStartPort     : 0
AutoReusePortRangeNumberOfPorts : 0

SettingName                     : Compat
MinRto(ms)                      : 300
InitialCongestionWindow(MSS)    : 4
CongestionProvider              : NewReno
CwndRestart                     : False
DelayedAckTimeout(ms)           : 200
DelayedAckFrequency             : 2
MemoryPressureProtection        : Disabled
AutoTuningLevelLocal            : Normal
AutoTuningLevelGroupPolicy      : NotConfigured
AutoTuningLevelEffective        : Local
EcnCapability                   : Disabled
Timestamps                      : Disabled
InitialRto(ms)                  : 1000
ScalingHeuristics               : Disabled
DynamicPortRangeStartPort       : 49152
DynamicPortRangeNumberOfPorts   : 16384
AutomaticUseCustom              : Disabled
NonSackRttResiliency            : Disabled
ForceWS                         : Enabled
MaxSynRetransmissions           : 4
AutoReusePortRangeStartPort     : 0
AutoReusePortRangeNumberOfPorts : 0

SettingName                     : Datacenter
MinRto(ms)                      : 20
InitialCongestionWindow(MSS)    : 10
CongestionProvider              : CUBIC
CwndRestart                     : False
DelayedAckTimeout(ms)           : 10
DelayedAckFrequency             : 2
MemoryPressureProtection        : Disabled
AutoTuningLevelLocal            : Normal
AutoTuningLevelGroupPolicy      : NotConfigured
AutoTuningLevelEffective        : Local
EcnCapability                   : Disabled
Timestamps                      : Disabled
InitialRto(ms)                  : 1000
ScalingHeuristics               : Disabled
DynamicPortRangeStartPort       : 49152
DynamicPortRangeNumberOfPorts   : 16384
AutomaticUseCustom              : Disabled
NonSackRttResiliency            : Disabled
ForceWS                         : Enabled
MaxSynRetransmissions           : 4
AutoReusePortRangeStartPort     : 0
AutoReusePortRangeNumberOfPorts : 0

SettingName                     : Internet
MinRto(ms)                      : 300
InitialCongestionWindow(MSS)    : 10
CongestionProvider              : CUBIC
CwndRestart                     : False
DelayedAckTimeout(ms)           : 40
DelayedAckFrequency             : 2
MemoryPressureProtection        : Disabled
AutoTuningLevelLocal            : Normal
AutoTuningLevelGroupPolicy      : NotConfigured
AutoTuningLevelEffective        : Local
EcnCapability                   : Disabled
Timestamps                      : Disabled
InitialRto(ms)                  : 1000
ScalingHeuristics               : Disabled
DynamicPortRangeStartPort       : 49152
DynamicPortRangeNumberOfPorts   : 16384
AutomaticUseCustom              : Disabled
NonSackRttResiliency            : Disabled
ForceWS                         : Enabled
MaxSynRetransmissions           : 4
AutoReusePortRangeStartPort     : 0
AutoReusePortRangeNumberOfPorts : 0

It's exactly the same (I copied the result of Get-NetTCPSetting and used Ctrl+F in this thread to compare). The last tests were run in this configuration: on the Dell with CPU Intel i5-5300U I ran the version I compiled myself, and on the Dell with CPU Intel i5-10310U I started the server. The problem appears on both machines, so it seems it does not matter which one is the server and which is the client - the upload is always overstated.

So it seems all that remains is to wait - maybe someone else will come along with a similar problem, or we will accidentally understand this one while working on another issue.

davidBar-On commented 4 years ago

Also, please send the output of the following shell command on both computers: Get-NetTCPSetting

For the Dell with CPU Intel i5-10310U the result is here: #1069 (comment)

I see that I am starting to repeat myself ... The last test I thought of that you may want to try is disabling the auto-tuning of the window size, in case the TCP stack does not handle this parameter correctly. In a shell, run the following command and then run iperf3: netsh interface tcp set global autotuninglevel=disabled.

So it seems all that remains is to wait - maybe someone else will come along with a similar problem, or we will accidentally understand this one while working on another issue.

I agree (except maybe for the above test). You may also try to find a Windows forum and ask what could cause an unlimited amount of memory to be allocated for the TCP buffers or network transmit buffers when sending over TCP.

In any case, thanks for being willing to put in the effort to evaluate the issue.

olekstomek commented 4 years ago

@davidBar-On

The last test I thought of that you may want to try is disabling the auto-tuning of the window size, in case the TCP stack does not handle this parameter correctly. In a shell, run the following command and then run iperf3: netsh interface tcp set global autotuninglevel=disabled.

I did it and the problem is the same. Additionally, I restarted my OS and it is still the same.

I also did a test on one of the computers where the upload is incorrect: I ran Ubuntu 20.04 from a pendrive and installed iPerf 3.1.3, and the results are correct, below 100Mbps (though I noticed that the server showed a slightly different value, and the transfer value on the Linux client varied from line to line - e.g. in the first line the client reports sending at about 95Mbps while the Windows server shows about 93.5Mbps at that point)... But it's not an unreal 5Gbps... I found some information about this here.

The problem does not occur only with iPerf but also, for example, with a desktop application from here (the upload result also jumps to several Gbps).

Additionally, I could suspect that the computer is in a domain and has some special network policies imposed from above. Still, that doesn't mean I should be seeing incorrect results. In any case, the local tests were performed between computers on a local network without internet access.

You may also try to find a Windows forum and ask what could cause an unlimited amount of memory to be allocated for the TCP buffers or network transmit buffers when sending over TCP.

I did some quick research on how speedtest results can be manipulated or faked - it's hard to find much, because most threads are about ISPs prioritizing traffic. But I found out that the problem does not occur only on Windows; something similar also happens on Linux. The case is with Docker, https://github.com/nerdalert/iperf3/issues/2, but if I understand correctly the problem there is in the image (one image that works correctly vs another that works incorrectly).

In any case, thanks for being willing to put in the effort to evaluate the issue.

I would also like to thank you for your time and suggestions.

jfinnie commented 3 months ago

I'm also seeing similarly implausible results if I don't limit the TCP upload bitrate; UDP seems fine. @olekstomek did you ever manage to work out what was going on here?

C:\Users\james\Downloads\iperf3.17_64\iperf3.17_64>iperf3.exe -c 192.168.20.169
Connecting to host 192.168.20.169, port 5201
[  5] local 192.168.20.107 port 1169 connected to 192.168.20.169 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.01   sec  6.34 GBytes  53.8 Gbits/sec
[  5]   1.01-2.00   sec  4.31 GBytes  37.4 Gbits/sec
[  5]   2.00-3.01   sec  4.71 GBytes  40.3 Gbits/sec
[  5]   3.01-4.01   sec  4.46 GBytes  38.1 Gbits/sec
[  5]   4.01-5.00   sec  4.07 GBytes  35.3 Gbits/sec
[  5]   5.00-6.01   sec  4.12 GBytes  35.2 Gbits/sec
[  5]   6.01-7.01   sec  4.00 GBytes  34.2 Gbits/sec
[  5]   7.01-8.00   sec  3.56 GBytes  30.9 Gbits/sec
[  5]   8.00-9.01   sec  3.47 GBytes  29.7 Gbits/sec
[  5]   9.01-10.01  sec  3.36 GBytes  28.7 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.01  sec  42.4 GBytes  36.4 Gbits/sec                  sender
[  5]   0.00-10.01  sec   966 MBytes   809 Mbits/sec                  receiver

iperf Done.
olekstomek commented 3 months ago

@jfinnie I wasn't able to determine the cause despite various attempts (see the comments in this thread). I no longer have access to the equipment on which the described problem occurred. If you find something interesting, it would be great if you could share more details.

davidBar-On commented 3 months ago

@jfinnie, there is probably no point in repeating the extensive tests done by @olekstomek, but the following may help to understand whether it is the same problem:

  1. What is the HW / OS version you are using?
  2. What is the network architecture between the client and server computers? Direct cable connection? Over WiFi? Any routers/switches between them? etc.
  3. iperf3 -v, to learn the iperf3 version, under which OS it was built, etc.
  4. Run the test for a much longer time, e.g. 300 seconds (-t 300). The long-term behavior may give a better understanding of whether it is a temporary issue or something else.
  5. Try a UDP test with a bitrate higher than the received bitrate, e.g. 2Gbps (-u -b 2G -t 300).