irishj opened this issue 4 years ago
eth4      Link encap:Ethernet  HWaddr 24:5E:BE:XX:XX:XX
          inet addr:192.168.0.42  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::265e:beff:fe42:e18b/64  Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
Please try enabling Jumbo Frames. Jumbo Frames may increase performance, especially with low-performance CPUs.
Also, did you encounter any network disconnections with the QNA-UC5G1T during the test? Some users report this disconnection issue with the DS1815+.
I'd prefer not to enable JF, as I'd need to enable it on all my network devices. Speed is fine on Windows 10: 370MB/s max transfer from another host via SMB. I tried the same thing on a DS412+ and have the same speed issue as on the DS1815+ (transfer speeds are identical).
Disconnection issues, yes, I've experienced them. It works fine for iperf testing, even with multiple streams, but put some SMB traffic through and the transfer will freeze, then the connection is lost. The driver is still running after this occurs, and if you stop and restart the driver, the connection comes back.
I'd prefer not to enable JF, as I'd need to enable it on all my network devices.
Indeed, switching to Jumbo Frames is painful when there are many devices, but it can isolate whether the problem is a CPU bottleneck or something else.
Generally, creating transactions with USB devices is CPU-heavy, and using Jumbo Frames helps reduce the number of transactions. In my environment (DS918+), I observed that enabling Jumbo Frames increases throughput and lowers CPU load.
Furthermore, if the disconnection issue did not occur during the iperf test, the issue might also be caused by CPU load. So I would much appreciate it if you could enable Jumbo Frames temporarily among a limited set of devices to test.
Lastly, changing the SMB protocol level may affect throughput because the encryption mode is determined by this setting. https://www.synology.com/en-us/knowledgebase/DSM/help/DSM/AdminCenter/file_winmacnfs_win
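For a quick, reversible test, the MTU can also be bumped temporarily over SSH instead of reconfiguring every device. A minimal sketch, assuming the USB NIC shows up as eth4 (as in the ifconfig output above) and you have admin SSH access to the NAS:
# temporarily raise the MTU on the USB NIC only (reverts on reboot or driver restart)
sudo ip link set eth4 mtu 9000
# or, if only ifconfig is available:
sudo ifconfig eth4 mtu 9000
# confirm the change took effect
ip link show eth4 | grep mtu
Only the client you test against needs a matching MTU (and any switch in between must pass jumbo frames); the rest of the LAN can be left alone for the duration of the test.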
Not sure if this is related: I just connected one of the trend adapters (using a 10Gbps C > A adapter) to my DS1815+ and used the HTML5 speed test Docker container. This does the full 1Gbps in both directions on the regular LAN.
With the adapter I get 2Gbps down (which I consider good given the Avoton platform), however up is only 500Mbps, which is odd as this isn't writing to disk, it's purely a memory operation.
What could cause this disparity?
Jumbo frames fixed this for me; I needed to set it on:
What I don't understand is why this makes a difference on an HTML speed test!?
Probably this difference is caused by the reason in the past comment and some other conditions (e.g. the write buffer size of the ethernet adapter, the overhead of creating USB transactions, etc.).
Generally, creating transactions with USB devices is CPU-heavy, and using Jumbo Frames helps reduce the number of transactions. In my environment (DS918+), I observed that enabling Jumbo Frames increases throughput and lowers CPU load.
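One way to check whether per-packet CPU overhead really is the limit is to watch per-core load on the NAS while the speed test runs. A rough sketch over SSH (tool availability varies by DSM version; mpstat is only there if sysstat is installed):
# watch overall and per-core CPU usage while traffic is flowing
top -d 1
# or, if sysstat is available, sample every second:
mpstat -P ALL 1
If one core sits near 100% (often in system/softirq time) during the slow direction but not the fast one, that points at per-packet overhead rather than disk or SMB.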
Hi guys, I'm also using this driver on a DS1815+. Thanks very much @bb-qq for the driver. I thought a couple of the QNAP devices (QNA-UC5G1T) might breathe new life into it since it has no PCIe slot. Anyway, when I connected the two dongles to Windows 10 machines I actually saw 400MB/s. When I connect one dongle to my DS1815+ and the other end to one of the Win 10 machines and copy something from the NAS to the Win 10 machine, I am only seeing 45 - 50 MB/s. If I go over my LAN to the NAS I am getting 115MB/s. I also enabled Jumbo Frames (9K) on both the DS1815+ and the Windows 10 machine, and the most I see is about 60MB/s.
To be honest, that is not even the worst part for me. I got the QNA dongles to increase the bandwidth between my DS1815+ and my ESXi hosts for NFS datastores. I can connect to the DS1815+ from the ESXi host for about 1 minute before the DS1815+ shuts down the connection. The lights even go off on the dongle connected to the DS1815+, and DSM -> Network says the cable has been disconnected. The only way to fix this is to go to DSM > Packages and stop/run your package again. Is this the only way to reset the connection?
To verify the QNAP devices were working, I connected the Windows 10 machine to the ESXi host via the QNAP devices and they worked fine. So I am not sure why that is happening. @bb-qq, are there logs on the DS1815+ that I could look at to see if there were errors? dmesg just tells me things are being killed...
[12543.401148] usb 2-2: ep 0x81 - rounding interval to 64 microframes, ep desc says 80 microframes
[12670.525781] init: dhcp-client (eth4) main process (32653) killed by TERM signal
[12670.866393] init: winbindd main process (1823) killed by TERM signal
[12676.848887] aqc111 3-4:1.0 eth4: Link Speed 5000, USB 3
[12678.316922] init: dhcp-client (eth4) main process (7532) killed by TERM signal
[12678.589789] init: winbindd main process (7775) killed by TERM signal
[12681.254427] init: winbindd main process (9207) killed by TERM signal
[12686.887692] init: iscsi_pluginserverd main process ended, respawning
[12688.180064] iSCSI:iscsi_target.c:612:iscsit_del_np CORE[0] - Removed Network Portal: 169.254.135.247:3260 on iSCSI/TCP
[12688.193378] iSCSI:iscsi_target.c:520:iscsit_add_np CORE[0] - Added Network Portal: 192.168.137.95:3260 on iSCSI/TCP
[12688.259687] init: iscsi_pluginserverd main process (10880) killed by TERM signal
[12690.230024] init: iscsi_pluginserverd main process (11133) killed by TERM signal
[12690.248030] init: iscsi_pluginengined main process (11126) killed by TERM signal
[12690.838935] init: iscsi_pluginserverd main process (11351) killed by TERM signal
[12690.855612] init: iscsi_pluginengined main process (11336) killed by TERM signal
[12694.017491] init: iscsi_pluginserverd main process (11434) killed by TERM signal
[12694.040528] init: iscsi_pluginengined main process (11419) killed by TERM signal
[12695.976354] init: iscsi_pluginserverd main process (11787) killed by TERM signal
[12695.989116] init: iscsi_pluginengined main process (11775) killed by TERM signal
[12698.887614] usb 2-2: ep 0x81 - rounding interval to 64 microframes, ep desc says 80 microframes
[12739.856374] init: dhcp-client (eth4) main process (8621) killed by TERM signal
[12740.430920] init: winbindd main process (11356) killed by TERM signal
[12750.050630] iSCSI:iscsi_target.c:612:iscsit_del_np CORE[0] - Removed Network Portal: 192.168.137.95:3260 on iSCSI/TCP
[12750.062860] iSCSI:iscsi_target.c:612:iscsit_del_np CORE[0] - Removed Network Portal: [fe80::265e:beff:fe4d:a71e]:3260 on iSCSI/TCP
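A few places that might surface driver errors on the NAS, as a rough sketch over SSH (log locations can differ between DSM versions):
# kernel messages mentioning the driver or the USB port the NIC sits on
dmesg | grep -iE 'aqc111|eth4|usb 2-2'
# general system log (path may vary by DSM version)
tail -n 200 /var/log/messages
If the link drop coincides with an error or reset line from the aqc111 driver, that would help narrow down whether it is a USB-level or driver-level problem.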
Just wanted to add some iperf3 stats. When the DS1815+ is the server I get weird results; when the DS1815+ is the client things seem better. Below are the results as seen from my Windows 10 machine. It was the server first, and then it was the client connecting to the DS1815+ (192.168.22.1).
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.22.1, port 51882
[ 5] local 192.168.22.20 port 5201 connected to 192.168.22.1 port 51883
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 234 MBytes 1.96 Gbits/sec
[ 5] 1.00-2.00 sec 242 MBytes 2.03 Gbits/sec
[ 5] 2.00-3.00 sec 244 MBytes 2.05 Gbits/sec
[ 5] 3.00-4.00 sec 245 MBytes 2.05 Gbits/sec
[ 5] 4.00-5.00 sec 243 MBytes 2.04 Gbits/sec
[ 5] 5.00-6.00 sec 244 MBytes 2.05 Gbits/sec
[ 5] 6.00-7.00 sec 243 MBytes 2.04 Gbits/sec
[ 5] 7.00-8.00 sec 243 MBytes 2.04 Gbits/sec
[ 5] 8.00-9.00 sec 243 MBytes 2.04 Gbits/sec
[ 5] 9.00-10.00 sec 245 MBytes 2.06 Gbits/sec
[ 5] 10.00-10.04 sec 8.59 MBytes 2.01 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.04 sec 0.00 Bytes 0.00 bits/sec sender
[ 5] 0.00-10.04 sec 2.38 GBytes 2.04 Gbits/sec receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
iperf3: interrupt - the server has terminated
PS C:\Users\Brimur\Downloads\iperf-3.1.3-win64\iperf-3.1.3-win64> .\iperf3.exe -c 192.168.22.1
Connecting to host 192.168.22.1, port 5201
[ 4] local 192.168.22.20 port 51885 connected to 192.168.22.1 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 256 KBytes 2.09 Mbits/sec
[ 4] 1.00-2.00 sec 0.00 Bytes 0.00 bits/sec
[ 4] 2.00-3.00 sec 0.00 Bytes 0.00 bits/sec
[ 4] 3.00-4.00 sec 0.00 Bytes 0.00 bits/sec
[ 4] 4.00-5.00 sec 128 KBytes 1.05 Mbits/sec
[ 4] 5.00-6.00 sec 0.00 Bytes 0.00 bits/sec
[ 4] 6.00-7.00 sec 0.00 Bytes 0.00 bits/sec
[ 4] 7.00-8.00 sec 0.00 Bytes 0.00 bits/sec
[ 4] 8.00-9.00 sec 0.00 Bytes 0.00 bits/sec
[ 4] 9.00-10.00 sec 128 KBytes 1.05 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 512 KBytes 419 Kbits/sec sender
[ 4] 0.00-10.00 sec 268 KBytes 220 Kbits/sec receiver
iperf Done.
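Both directions can also be driven from the Windows side alone using iperf3's reverse mode, which makes it easier to compare the two paths without swapping server and client roles. A sketch using the same address as the runs above:
# client -> NAS (data flows into the DS1815+), 4 parallel streams, 30 seconds
.\iperf3.exe -c 192.168.22.1 -P 4 -t 30
# NAS -> client (reverse direction, NAS transmits)
.\iperf3.exe -c 192.168.22.1 -P 4 -t 30 -R
If the -R run is fast while the plain run stalls like the one above, the bottleneck is specifically on the NAS's receive side.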
You should be using iperf 3.7 x64.
v3.1.1 has known issues on the Windows build with high-speed networks (you will need to google it; it is not on the regular iperf site for some reason).
Also, you did set jumbo frames / MTU 9000 on the two PCs, the Synology and all intermediate switches, right?
PS: get 2Gbps Win10 <> Win10 working first (i.e. validate that the adapter, cables and MTUs are all right).
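One quick way to confirm the 9000 MTU actually survives end-to-end is a don't-fragment ping with a near-9000 payload. A sketch, using the addresses from this thread (8972 assumes the usual 28 bytes of IP+ICMP overhead):
# from the Windows 10 machine: -f = don't fragment, -l = payload size
ping -f -l 8972 192.168.22.1
# from a Linux box or the NAS over SSH:
ping -M do -s 8972 192.168.22.20
If these come back "Packet needs to be fragmented" / "Message too long", something in the path is still at 1500.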
Thanks, I mentioned in my post above that I tested with 2 x Win10 machines and they had solid connections; I was able to transfer ~400MB/s. When I connect one of those machines to my DS1815+ the issues start. The link on both sides says 5Gbps and 9000 MTU, but speeds are maxing out at 60MB/s (115MB/s over the normal 1Gbps home network).
I installed the HTML5 speed test and ran some speed tests. The picture below compares my LAN connection to the DS1815+ with the AQC111 connection. The HTML5 speed tests show ~1Gbps for the LAN connection as expected and 2 - 2.4Gbps for the AQC111. Moving to file transfers, I am seeing ~100MB/s over the 1Gbps LAN but only ~60 - 70MB/s over the 2 - 2.5Gbps link...
All of this is from the same machine, just using different routes.
Was the file copy a single file or many smaller ones? In the bond above, was it many small files? If it was, I am not sure whether SMB actually uses both sides of the LAG / bond these days.
It was a single 8GB ISO file used for both file copies. The source Windows 10 machine only has a single 1Gb NIC anyway, so it would not benefit from the bond on the NAS, but the bond wouldn't affect a single stream either. On the plus side, I installed NFS client services on the Windows 10 machine and was able to get the file copy speed on the AQC111 up to 120MB/s, so NFS is at least working better than SMB/CIFS, but still far away from 3Gbps or even 2Gbps.
I am pretty much out of ideas. Sounds like you are doing everything right. I will try and repro a file copy. One last thought: old versions of SMB are pretty chatty; have you tried forcing SMB3 on the Synology?
I never bothered with this test because I knew I would be limited by disk speed, i.e. no way to hit that upper limit on a read from a single disk on a PC, not to mention the write speed on the DS. But I will give it a go at the weekend.
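For reference, DSM exposes the protocol range in Control Panel > File Services (SMB advanced settings, minimum/maximum SMB protocol). The equivalent Samba directives, if one were editing smb.conf by hand, would look roughly like this; a sketch only, not DSM-specific paths or guaranteed option spellings for the Samba version DSM ships:
[global]
    # refuse SMB1/SMB2 clients, negotiate SMB3 only
    server min protocol = SMB3
    server max protocol = SMB3
Forcing SMB3 also changes which signing/encryption code paths get used, which is what the earlier comment about the protocol level affecting throughput was getting at.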
Thanks for all the suggestions. I had file services set to SMB3 and SMB2 with Large MTU. As mentioned, it's fine when I go over my local network; the 1Gb link is maxed out. It's when I use the 5Gb link that I see the slowdown. I even swapped the cables in case one was dodgy, but no difference.
I have had more luck on my 1815+ using the Realtek-based 2.5GbE NICs: over 200MB/s read and write, so you might want to try those. But honestly multichannel SMB is more stable and even faster, so after testing I don't use it any more.
I've actually started having what may be the same issue recently on a Fedora 32 client. It was fine on Fedora 31, but now I'm getting random freezes with nothing in syslog. It seems to be worse on one USB 3.0 controller than the other (my motherboard has a second controller for two extra ports). Even when it works, it takes so long to come up at boot that my NFS mounts fail.
It's worth noting I'm already using jumbo frames at 6000, as that performs faster than 9000 in my testing. You also don't have to use jumbo frames across the whole LAN: I only set it on the NAS (which has 10Gbit) and this client, and it does not impact other clients (if it did, the Internet wouldn't ever work from a jumbo-frames LAN) thanks to path MTU discovery.
It's also worth noting I upgraded from a Realtek 2.5Gbit adapter, as that has its own issues, spamming syslog with up/down messages since the Linux drivers don't officially support it. I'm tempted to just go back to the onboard Gigabit at this point, as it seems the Linux drivers for these USB NICs suck.
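For the NFS mounts failing because the NIC comes up slowly, deferring the mount until first access usually sidesteps the race. A sketch of an /etc/fstab line; the server path and mountpoint here are placeholders:
# mount lazily on first access instead of at boot, and don't block boot if it fails
nas:/volume1/share  /mnt/nas  nfs  _netdev,nofail,x-systemd.automount,x-systemd.idle-timeout=600  0  0
With x-systemd.automount the mount is only attempted when something touches /mnt/nas, by which time the aqc111 link is normally up.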
The Internet has routers that do packet fragmentation; you don't have that within a LAN. Things get messy with inconsistent packet sizes on a LAN, and the common advice is to avoid it or you get hard-to-diagnose issues. On the other hand, a USB NIC on a low-powered Atom CPU is kind of the perfect place to use jumbo frames if you can do it right.
@brimur - did you get anywhere with this?
I’m getting very similar issues: https://github.com/bb-qq/aqc111/issues/43
No, I gave up on it. Yours is a newer Synology so you might have better luck, but the link on my DS1815+ just kept dying after a few minutes at 5Gb. If I set it to 2.5Gb it stays alive a bit longer, and at 1Gb it seems to be stable, but that's no use to me. Also, this driver cannot be installed on the new DSM 7.
Also this driver cannot be installed on the new DSM 7
Why is that? I read they were removing some drivers, but I'm surprised if missing drivers can't be compiled and installed just like this one?
That said, it could be a moot point, as I can't get this anywhere near working.
They are locking down third-party low-level access. You might still be able to install it via SSH, but it really was too unstable for me to put in any more effort.
Description of the problem
Trying this driver on a DS1815+ (Avoton). The driver works fine (it installs, a new connection is shown and an IP is retrieved via DHCP) and I can access the host and ping it without issue.
Read speeds are great: 270MB/s when copying a file via SMB from the DiskStation to another network host.
The issue I'm having is that writes are really slow, 60MB/s - 72MB/s max. Data is being written onto an SHR volume (btrfs). Writes with the onboard NICs max out at 113MB/s. Tried from two different network clients, both running 10G Ethernet.
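To rule out the volume itself as the write bottleneck, a local write test on the NAS is useful. A rough sketch over SSH, assuming the SHR volume is mounted at /volume1 (the test file can be deleted afterwards):
# write 2GB locally, forcing data to disk before reporting the rate
dd if=/dev/zero of=/volume1/ddtest.bin bs=1M count=2048 conv=fdatasync
rm /volume1/ddtest.bin
If this reports well above ~72MB/s, the slow writes are coming from the network/SMB path rather than the btrfs volume.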
Description of your products
Description of your environment
Output of dmesg command
Output of lsusb command
Output of ifconfig -a command
If you have any suggestions on possible causes here it would be appreciated. I'm available if you need any further information or testing to be conducted.
Thanks!