bytedance / libtpa

Libtpa (Transport Protocol Acceleration), a DPDK-based userspace TCP stack implementation.
https://bytedance.github.io/libtpa/
BSD 3-Clause "New" or "Revised" License

About bonding for nics #10

Closed johndb2016 closed 1 month ago

johndb2016 commented 1 month ago

We use bonding widely on our servers, so does Libtpa support bonding? How do we make it work? An example would be appreciated, thanks.

yuanhanliu commented 1 month ago

If you run the application with the 'tpa run' wrapper, nothing extra is needed: the wrapper will set up all the necessary configs for you. Otherwise, you have to write the right config file yourself. A simple run like 'tpa run :' will dump the configs to the terminal.
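
For illustration, a session might look like the sketch below; only the 'tpa run :' form comes from this thread, and the application command is a placeholder.

```sh
# Dump the generated configs to the terminal (':' is the shell no-op,
# so no real application is started)
tpa run :

# Run an application under libtpa; the wrapper sets up the bonding
# configs for you ("./my_server --port 8080" is a placeholder command)
tpa run ./my_server --port 8080
```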

Also, please note that Libtpa only supports bonding mode 4 (802.3ad/LACP), with at most 2 ports.
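
As a side note (not from this thread): one quick way to confirm the kernel bond is actually in mode 4 is via /proc; "bond0" below is just an example interface name.

```sh
# Check the bonding mode of the kernel bond device ("bond0" is an example name)
grep "Bonding Mode" /proc/net/bonding/bond0
# Expected for mode 4: Bonding Mode: IEEE 802.3ad Dynamic link aggregation
```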

johndb2016 commented 1 month ago

With a Mellanox dual-port card (eth0 and eth1) in bonding mode 4, it enters RoCE LAG mode by default. However, I find that libtpa can only use eth0 in RoCE LAG mode, like this: [image] How can I fix this? Is this a bug in dpdk-20.11.3?

yuanhanliu commented 1 month ago

It's not a bug.

The 'tpa run' wrapper adds two DPDK arguments (-a xxx -a yyy) in bonding mode, so the Mellanox DPDK driver performs two probes. That makes it work whether or not RoCE LAG is enabled: if RoCE LAG is not enabled, both probes succeed and you end up with two DPDK ports; if RoCE LAG is enabled, one probe fails, which is expected, and you end up with a single DPDK port.
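
For reference, '-a' is the DPDK 20.11 EAL "allow" option that restricts probing to the listed PCI devices. A rough sketch of what the wrapper effectively adds is below; the PCI addresses are placeholders for the bond's two slave ports.

```sh
# EAL device options added by the wrapper in bonding mode (placeholder
# PCI addresses). With RoCE LAG enabled, the second probe fails as
# expected and a single DPDK port remains; otherwise both probes
# succeed and two DPDK ports show up.
-a 0000:3b:00.0 -a 0000:3b:00.1
```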