Closed: ydzhus closed this issue 7 months ago.
The transfer started at 231 megabytes per second, then fluctuated down and up between 180 and 200 MB/s, and later fell to 128 MB/s. The curve trends downward overall.
This symptom suggests that the disk read speed is not keeping up with the copy process. (Even if you are using an SSD cache, it has no effect when the file being read is not in the cache.)
To isolate the problem, please post the iperf results. Also, what happens if you run iperf from both laptops at the same time?
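To rule the disks in or out independently of the network, it can also help to measure raw sequential read speed on the NAS itself. A minimal sketch, assuming shell access to the NAS; the path /volume1/test.bin is a placeholder for any large existing file on the volume:

```shell
# Read a large file from the array and discard the output, so only
# disk (and cache) speed is measured, not the network.
# /volume1/test.bin is a hypothetical path; use any multi-GB file.
dd if=/volume1/test.bin of=/dev/null bs=1M status=progress
```

If a second run is much faster than the first, the file was served from cache rather than from the spinning disks.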
Is it better to connect the two clients by cable, or can they use Wi-Fi 5 and Wi-Fi 6? My router has only one 2.5G port, and the NAS is connected to it, so wired clients can only use the 1G ports.
Is there a specific iperf test command that can be run on two clients to see the maximum throughput? Different commands give different speeds, so it would be good to standardize on one.
Could all this be because the disks cannot keep up? I have two WD40EFRX-68N32N0 drives in an SHR mirror.
Does it make sense to disable the SSD cache for separate iperf testing? Although I would like to keep using the SSD cache and get an effective 2.5G speed.
Is it better to connect the two clients by cable, or can they use Wi-Fi 5 and Wi-Fi 6? My router has only one 2.5G port, and the NAS is connected to it, so wired clients can only use the 1G ports.
Even if each port tops out at 1 Gbps, the two together total 2 Gbps, so I think it makes sense to check whether that combined speed is sustained.
Wireless connections are not well suited to multi-client testing because of their variability and radio interference between clients sharing the airwaves.
Is there a specific iperf test command that can be run on two clients to see the maximum throughput? Different commands give different speeds, so it would be good to standardize on one.
You can launch multiple instances of iperf3 by giving each one a different port number with -p (the default is 5201).
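As a sketch of that two-client test (the ports are examples, and 192.168.1.118 is the NAS address used in the commands later in the thread):

```shell
# On the NAS: one iperf3 server per client, each on its own port.
iperf3 -s -p 5201 &
iperf3 -s -p 5202 &

# On client 1:
iperf3 -c 192.168.1.118 -p 5201 -t 60 -i 5

# On client 2, started at the same time:
iperf3 -c 192.168.1.118 -p 5202 -t 60 -i 5
```

Summing the two client results shows whether the NAS's 2.5G link sustains roughly 2 Gbps in aggregate.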
Could all this be because the disks cannot keep up? I have two WD40EFRX-68N32N0 drives in an SHR mirror.
I would like to isolate that with iperf.
Does it make sense to disable the SSD cache for separate iperf testing? Although I would like to keep using the SSD cache and get an effective 2.5G speed.
The SSD cache should not slow down the transfer rate. To maintain ideal transfer speeds, however, the entire file being copied must be cached on the SSD.
In order to isolate the problem, I need you to post the iperf results. Also, what happens if you run iperf from those two laptops at the same time?
Both clients were connected to the router with 1G cables. In total, the server showed a stable 232.5 megabytes per second.
iperf3 -c 192.168.1.118 -p 5202 -t 60 -i 5 -P 4 -w 128k
iperf3 -c 192.168.1.118 -p 5201 -t 60 -i 5 -P 4 -w 128k
iperf3 -c 192.168.1.118 -p 5202
iperf3 -c 192.168.1.118 -p 5201
Since simultaneous transfers reach nearly 2 Gbps over Ethernet cable, there appears to be no problem with the WL-NWU330GCA or its driver.
So, to address the original issue, it would be a good idea to check the following.