jpirko / libteam

team netdevice library

Performance regression on migration from bonding driver to team #56

Open. CaptainSifff opened this issue 4 years ago

CaptainSifff commented 4 years ago

Hi all. On Debian Bullseye I have the following bonding configuration for four 1 GBit ports that are connected to an LACP-configured switch:

iface bond0 inet dhcp
    slaves eno1 eno2 eno3 eno4
    bond-mode 4
    bond-miimon 100
    bond-lacp-rate 1
    bond-xmit-hash-policy layer3+4
    bond-updelay 200
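
For reference, the negotiated 802.3ad state of this setup can be verified with standard tools; a minimal check, assuming the bond0 device name above (output trimmed):

cat /proc/net/bonding/bond0    # shows "Bonding Mode: IEEE 802.3ad", the transmit hash policy and a per-slave aggregator ID
ip -d link show bond0          # the same mode and xmit hash policy appear in the bond details printed by iproute2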

With iperf I measure that this configuration yields 4 GBit of aggregate throughput, and iptraf shows me that all four ports are about equally utilized. Using bond2team I obtained the following team configuration:

{
        "device":               "team0",
        "runner": {
                "name": "lacp",
                "fast_rate": true,
                "tx_hash": ["l3", "l4"]
        },
        "link_watch":           {"name": "ethtool", "delay_up" : 200},
        "ports":                {"eno1": {}, "eno2": {}, "eno3": {}, "eno4": {}}
}
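
Whether this configuration actually brings all four ports into the LACP aggregator can be checked at runtime with teamdctl (part of libteam); device name as in the config above, output will vary:

teamdctl team0 state        # summary of ports, link state and the lacp runner
teamdctl team0 state dump   # full state as JSON, including per-port aggregator info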

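A note on the measurement itself: with an l3+l4 hash every individual flow is pinned to a single member link, so a throughput test only spreads across ports when it runs several parallel streams. An illustrative iperf invocation (host name and flag values are placeholders, not necessarily what was used here):

iperf -s                        # on the receiving host
iperf -c <server> -P 8 -t 30    # 8 parallel TCP streams for 30 s, giving the hash several flows to distribute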
With the team configuration I only obtain 2.2 GBit of throughput and only two utilized ports. Which two ports are used changes on reruns, but it is always only two. Adding a tx_balancer option does not change the picture. Any hints on what went wrong?
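
For completeness, the tx_balancer variant mentioned above would look roughly like this in the runner section; per teamd.conf(5), "basic" is the available balancer name and 50 is the documented default interval, so treat the exact values as illustrative:

        "runner": {
                "name": "lacp",
                "fast_rate": true,
                "tx_hash": ["l3", "l4"],
                "tx_balancer": {
                        "name": "basic",
                        "balancing_interval": 50
                }
        },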

Version info:
modinfo team: filename: /lib/modules/5.9.0-1-amd64/kernel/drivers/net/team/team.ko
teamd -v: teamd 1.31