polarfire-soc / meta-polarfire-soc-yocto-bsp

PolarFire SoC Yocto Board Support Package

MTU 9000 is not supported in 2023.09 for eth1 #52


kolabit commented 9 months ago

I need a 9K MTU for my solution, and it used to work in earlier releases.

For example in 2023.02:

root@icicle-kit-es:~# ip link set eth0 down
root@icicle-kit-es:~# ip link set eth0 mtu 9000
root@icicle-kit-es:~# ip link set eth0 up      
root@icicle-kit-es:~# uname -r
5.15.92-linux4microchip+fpga-2023.02.1
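As a quick sanity check (not captured in the session above), ip link show should now report mtu 9000 for the interface:

ip link show eth0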

However, in 2023.06 and the latest 2023.09, roughly 4K is the maximum. For example, in 2023.09:

root@icicle-kit-es:~# ip link set eth1 down
[  436.945883] macb 20112000.ethernet eth1: Link is Down
[  436.968356] macb 20112000.ethernet: gem-ptp-timer ptp clock unregistered.
root@icicle-kit-es:~# ip link set eth1 mtu 9000
Error: mtu greater than device maximum.
root@icicle-kit-es:~# ip link set eth1 mtu 4022
root@icicle-kit-es:~# ip link set eth1 up
[  456.955921] macb 20112000.ethernet eth1: PHY [20112000.ethernet-ffffffff:09] driver [Generic PHY] (irq=POLL)
[  456.965977] macb 20112000.ethernet eth1: configuring for phy/sgmii link mode
[  456.976752] pps pps0: new PPS source ptp0
[  456.981262] macb 20112000.ethernet: gem-ptp-timer ptp clock registered.
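The ceiling the driver advertises can also be read back directly; with a reasonably recent iproute2, the -d flag prints minmtu/maxmtu fields (command only, output not captured here):

ip -d link show eth1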
vfalanis commented 9 months ago

Hello @kolabit,

There was a fix introduced in 2023.02.1 Yocto release that included a patch in the MACB Cadence Linux driver to fix an issue that was causing the ethernet connection to be lost periodically. This issue is described in Cadence erratum ETH-1686.

The fix involves adjusting the MAC jumbo frame length, which limits the maximum MTU value that can be used without causing AMBA errors.

Hope this helps

kolabit commented 9 months ago

Hi @vfalanis, is it possible to get more info about Cadence erratum ETH-1686? Do you think it is possible to work around it if we need 9K only in one direction (TX)?

vfalanis commented 9 months ago

Hi @kolabit ,

The maximum TX length for PolarFire SoC is roughly 4K (4 KiB - 56 bytes to be exact). This means that the maximum MTU size that can be set is 4022. There is no workaround to increase the MTU since increasing the length is known to cause AMBA errors.
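As a worked check of those numbers (assuming the 56-byte margin is taken from 4 KiB = 4096 bytes and that the frame budget includes the 14-byte Ethernet header and the 4-byte FCS):

echo $(( 4096 - 56 ))      # 4040, maximum frame length
echo $(( 4040 - 14 - 4 ))  # 4022, maximum MTU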

kolabit commented 9 months ago

Hi, as I see it, 4022 is not working either. If I set MTU 4022 and run an iperf3 UDP test, the connection hangs after the first batch:

root@icicle-kit-es:~# iperf3 -c 10.22.33.1 -f K -u
Connecting to host 10.22.33.1, port 5201
[  5] local 10.22.33.44 port 38245 connected to 10.22.33.1 port 5201
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec  50.4 KBytes  50.4 KBytes/sec  13  
[  5]   1.00-2.00   sec  0.00 Bytes  0.00 KBytes/sec  0  
[  5]   2.00-3.00   sec  0.00 Bytes  0.00 KBytes/sec  0  

Then, 10.22.33.44 will not be pingable from the server side.
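When the link stalls like this, the kernel log may show whether the AMBA errors mentioned above are being hit; a filter such as the following is one way to check (no such errors were captured in this session):

dmesg | grep -iE 'macb|amba'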

If I set MTU=3700 it looks better:

root@icicle-kit-es:~# ip link show eth1         
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 3700 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:04:a3:33:6a:ce brd ff:ff:ff:ff:ff:ff

root@icicle-kit-es:~# iperf3 -c 10.22.33.1 -f K -u
Connecting to host 10.22.33.1, port 5201
[  5] local 10.22.33.44 port 57519 connected to 10.22.33.1 port 5201
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec   128 KBytes   128 KBytes/sec  36  
[  5]   1.00-2.00   sec   128 KBytes   128 KBytes/sec  36  
[  5]   2.00-3.00   sec   128 KBytes   128 KBytes/sec  36  
[  5]   3.00-4.00   sec   128 KBytes   128 KBytes/sec  36  
[  5]   4.00-5.00   sec   128 KBytes   128 KBytes/sec  36  
[  5]   5.00-6.00   sec   128 KBytes   128 KBytes/sec  36  
[  5]   6.00-7.00   sec   128 KBytes   128 KBytes/sec  36  
[  5]   7.00-8.00   sec   128 KBytes   128 KBytes/sec  36  
[  5]   8.00-9.00   sec   128 KBytes   128 KBytes/sec  36  
[  5]   9.00-10.00  sec   128 KBytes   128 KBytes/sec  36  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  1.25 MBytes   128 KBytes/sec  0.000 ms  0/360 (0%)  sender
[  5]   0.00-10.04  sec  1.25 MBytes   128 KBytes/sec  0.032 ms  0/360 (0%)  receiver

iperf Done.

UDP speed is horribly low, but TCP speed is OK. I will create a new issue for this.
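For reference, iperf3 limits UDP tests to 1 Mbit/s unless told otherwise, which matches the ~128 KBytes/sec above, so the low UDP figure is likely the default target bitrate rather than a hardware limit. A higher rate can be requested explicitly, for example (the 100M value is only an illustration):

iperf3 -c 10.22.33.1 -f K -u -b 100M    # or -b 0 for an unlimited UDP rate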