The following scenarios were tested with dpdk-testpmd; the results are summarized below (a sketch of scenarios 1 and 2 follows the list):

1. Create a bonding port with PF0 and PF1 as its members. With this port set to promiscuous mode, the DPDK application is able to receive packets by polling its rx queue.
2. Set the above bonding port to isolated mode and install isolation rules that capture IPinIPv6 packets into its rx queue (similar to what we do in non-offloading mode). Installation reports no error, but the packets never show up. I may need to re-test this, but that is how it behaves so far.
3. Installing a port-forwarding rule (bonding port -> VF) in the transfer (eswitch) domain fails with an installation error. Judging by the driver code, the bonding port is just another virtual device, similar to a vtap device, and actions in the transfer domain are not supported for it (yet).
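For reference, a minimal sketch of what scenarios 1 and 2 look like through the DPDK APIs. The member port ids 0/1 (PF0/PF1), queue index 0, the round-robin mode, and the outer-IPv6 next-header value 4 (IPv4-in-IPv6; 41 would match IPv6-in-IPv6) are assumptions for illustration; error handling and the usual dev_configure/queue_setup/dev_start sequence are omitted, and newer DPDK releases rename the slave_add call to rte_eth_bond_member_add:

```c
#include <netinet/in.h>
#include <rte_ethdev.h>
#include <rte_eth_bond.h>
#include <rte_flow.h>

static struct rte_flow *steer_ipinipv6(void)
{
	int bond_port;
	struct rte_flow_error err;

	/* Create the bonded device and attach PF0/PF1 as members. */
	bond_port = rte_eth_bond_create("net_bonding0", BONDING_MODE_ROUND_ROBIN, 0);
	rte_eth_bond_slave_add(bond_port, 0);   /* PF0 (assumed port id) */
	rte_eth_bond_slave_add(bond_port, 1);   /* PF1 (assumed port id) */
	rte_eth_promiscuous_enable(bond_port);  /* scenario 1 */

	/* Scenario 2: isolated mode, then an ingress rule for outer IPv6
	 * whose next header is 4 (an IPv4 packet tunnelled in IPv6). */
	rte_flow_isolate(bond_port, 1, &err);

	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_ipv6 ip6_spec = { .hdr.proto = IPPROTO_IPIP };
	struct rte_flow_item_ipv6 ip6_mask = { .hdr.proto = 0xff };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV6,
		  .spec = &ip6_spec, .mask = &ip6_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* Returns NULL on failure; in scenario 2 this succeeded, yet the
	 * matching packets were not seen on the queue. */
	return rte_flow_create(bond_port, &attr, pattern, actions, &err);
}
```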
The impression is that this feature facilitates sharing MAC and IP addresses between the two PFs, but it does not fundamentally change anything beyond that. We could rely on DPDK's link bonding (LACP/LAG) PMD to replace the current WCMP algorithm in dpservice (see the sketch below), but I do not see the necessity for it.
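A minimal sketch of what that replacement could look like, assuming a mode-4 (802.3AD/LACP) bond over PF0/PF1 with a layer-3/4 transmit hash; the port ids and device name are illustrative, and configuring/starting the bonded port is omitted:

```c
#include <rte_ethdev.h>
#include <rte_eth_bond.h>

static int create_lacp_bond(void)
{
	/* LACP bond over both uplinks instead of software WCMP. */
	int bond_port = rte_eth_bond_create("net_bonding_lacp",
					    BONDING_MODE_8023AD, 0);
	if (bond_port < 0)
		return bond_port;

	rte_eth_bond_slave_add(bond_port, 0);   /* PF0 (assumed port id) */
	rte_eth_bond_slave_add(bond_port, 1);   /* PF1 (assumed port id) */

	/* Hash on L3/L4 headers so flows are spread across both uplinks,
	 * which is roughly what WCMP achieves in software today. */
	rte_eth_bond_xmit_policy_set(bond_port, BALANCE_XMIT_POLICY_LAYER34);

	return bond_port;
}
```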
Reference document: [1] https://doc.dpdk.org/guides/prog_guide/link_bonding_poll_mode_drv_lib.html
Will close and archive this as a future reference in case the question needs a revisit.
Investigate whether NVIDIA NIC LAG (link aggregation) and DPDK LAG can somehow be used together with rte_flow. OVS seems to support LAG together with DPDK rte_flow; investigate how this is done.
- https://doc.dpdk.org/guides/prog_guide/link_bonding_poll_mode_drv_lib.html
- https://stackoverflow.com/questions/69084047/is-there-any-patch-for-hardware-lag-mode-implementation-in-ovs-dpdk