CloudNativeDataPlane / cndp

Cloud Native Data Plane (CNDP) is a collection of user space libraries to accelerate packet processing for cloud applications, using AF_XDP sockets as the primary I/O.
BSD 3-Clause "New" or "Revised" License

When can CNDP support a dev bond feature? #239

Status: Open. nickcreate2021 opened this issue 1 year ago.

nickcreate2021 commented 1 year ago

When can CNDP support a dev bond feature, for example the DPDK bonding poll mode driver? http://doc.dpdk.org/guides/prog_guide/link_bonding_poll_mode_drv_lib.html

KeithWiles commented 1 year ago

We had not planned on adding the bonding driver to CNDP; it will take some effort to port the DPDK PMD. What is the use case and reason for needing the bonding driver?

nickcreate2021 commented 1 year ago

> We had not planned on adding the bonding driver to CNDP; it will take some effort to port the DPDK PMD. What is the use case and reason for needing the bonding driver?

In a commercial networking environment, for reliability and performance, the server's NICs generally require bonding, usually active-backup or LACP, and the application needs to adapt to such a networking environment.

maryamtahhan commented 1 year ago

Hi, CNDP relies on standard Linux networking. So in the case of NIC bonding, you would need to configure bonding as you would via Linux. However, until NIC bonding is supported with XDP and AF_XDP, the change won't be reflected in CNDP.
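For reference, a minimal sketch of that Linux-side bonding configuration with iproute2 (the interface names and the active-backup mode are illustrative assumptions):

```sh
# Create an active-backup bond with link monitoring every 100 ms
ip link add bond0 type bond mode active-backup miimon 100

# Slaves must be down before they can be enslaved
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

# Bring everything up; bond0 is the interface applications then use
ip link set eth0 up
ip link set eth1 up
ip link set bond0 up
```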

The last patch series I've seen regarding NIC bonding is here; as far as I'm aware, there hasn't been any update.

nickcreate2021 commented 1 year ago

> Hi, CNDP relies on standard Linux networking. So in the case of NIC bonding, you would need to configure bonding as you would via Linux. However, until NIC bonding is supported with XDP and AF_XDP, the change won't be reflected in CNDP.
>
> The last patch series I've seen regarding NIC bonding is here; as far as I'm aware, there hasn't been any update.

Hello, thanks for your reply. I know the Linux bond interface supports XDP. In a Kubernetes cloud environment, when a VF or SF is attached to a pod and we bond them using a Linux bond, the bond interface has just one queue, so the application cannot effectively use multi-core and multi-queue techniques. Do you agree?

maryamtahhan commented 1 year ago

Hi

I have not played around with bonding and AF_XDP, but according to the kernel documentation: "By default the bonding driver is multiqueue aware and 16 queues are created when the driver initializes..."
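If a different queue count is needed, the bonding documentation describes a tx_queues module parameter that fixes the count at module load time; a sketch (the value 32 is just an example):

```sh
# Reload the bonding module with a non-default TX queue count
# (only possible while no bond interfaces are in use)
modprobe -r bonding
modprobe bonding tx_queues=32
```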

And I just tried to create a bonded interface with 2 veth slaves and I see multiple queues:

$ ls /sys/class/net/bond1/queues/
rx-0/  rx-1/  rx-10/ rx-11/ rx-12/ rx-13/ rx-14/ rx-15/ rx-2/  rx-3/  rx-4/  rx-5/  rx-6/  rx-7/  rx-8/  rx-9/  tx-0/  tx-1/  tx-10/ tx-11/ tx-12/ tx-13/ tx-14/ tx-15/ tx-2/  tx-3/  tx-4/  tx-5/  tx-6/  tx-7/  tx-8/  tx-9/
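A rough sketch of a comparable veth-plus-bond setup (the names and the bond mode are illustrative assumptions):

```sh
# Two veth pairs to act as bond slaves (names are placeholders)
ip link add veth0 type veth peer name veth0p
ip link add veth1 type veth peer name veth1p

# Create a bond and enslave the veth devices
ip link add bond1 type bond mode balance-rr
ip link set veth0 master bond1
ip link set veth1 master bond1
ip link set bond1 up

# List the queues the bonding driver created
ls /sys/class/net/bond1/queues/
```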

I also tried to load (not at the same time) a vanilla XDP prog and an AF_XDP redirect program on the bonded interface. Both progs loaded as expected on the bonded interface; however, the bonded slaves didn't have anything loaded on them when I checked with bpftool and xdp-loader (and, to be honest, I wasn't expecting the AF_XDP redirection program to be mirrored, but I did think that something would be loaded on the slaves from the vanilla BPF prog, based on the bonding tests)...
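Those checks can be run along these lines (xdp-pass.o is a placeholder object file; xdp-loader ships with xdp-tools):

```sh
# Attach a vanilla XDP program to the bond interface
xdp-loader load bond1 xdp-pass.o

# Show XDP attachment state for the bond and each slave
xdp-loader status
bpftool net show
```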

So, this means that you would need to load another XDP prog on the VFs (slaves) to redirect to the bond master and then use AF_XDP on the bond master (again, I have not tested this, so I am just speculating). I'm also unsure whether there would be any intricacies for the AF_XDP TX path at this time. I need to think on it more...
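Sketched as commands, that speculation might look like the following, where redirect.o is a hypothetical XDP object that would call bpf_redirect() toward the bond master's ifindex (untested):

```sh
# Hypothetical: attach a small redirect program on each VF slave
# (redirect.o is assumed for illustration, not a shipped program)
ip link set dev vf0 xdp obj redirect.o sec xdp
ip link set dev vf1 xdp obj redirect.o sec xdp

# The CNDP application would then open its AF_XDP socket(s)
# on the bond master interface as usual
```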