openvswitch / ovs-issues

Issue tracker repo for Open vSwitch

The performance of NAT over floating IP with OVS-DPDK does not increase linearly with the number of PMD threads #234

Open Tongjian-zhang opened 2 years ago

Tongjian-zhang commented 2 years ago

When I tested the forwarding performance of floating IPs, I found that NAT traffic passing through OVS-DPDK nodes does not scale linearly as PMD threads are added; the load is merely distributed evenly across them. Are some resources mutually exclusive between the PMD threads?

OVS version: 2.13.4
OVN version: 21.03
DPDK version: 19.11.10

```
(ovsdpdk-vswitchd)[root@node08 /root/zhangtongjian/openvswitch-2.13.4]$ ovs-vsctl -V
ovs-vsctl (Open vSwitch) 2.13.4
DB Schema 8.2.0
```

```
[root@node08 ~]# docker exec -it -u root ovn_northd bash
(ovn-northd)[root@node08 /]# ovn-nbctl -V
ovn-nbctl 21.03.0
Open vSwitch Library 2.15.0
DB Schema 5.31.0
(ovn-northd)[root@node08 /]# ovn-sbctl -V
ovn-sbctl 21.03.0
Open vSwitch Library 2.15.0
DB Schema 20.16.1
```

Configuration with one PMD thread (pmd-cpu-mask=0x8000000, core 27):

```
(ovsdpdk-vswitchd)[root@node08 /root/zhangtongjian/openvswitch-2.13.4]$ ovs-vsctl list open
_uuid               : 236bb95d-8911-4f58-8162-b635fb58f409
bridges             : [1f7a0c91-b705-44b2-9daf-5a087b6af484, 3e104e83-897e-4fb3-8db8-dc950d5140d0, bb0d89bf-729c-43d0-9a32-3dffb9358eb0]
cur_cfg             : 79
datapath_types      : [netdev, system]
datapaths           : {}
db_version          : []
dpdk_initialized    : true
dpdk_version        : "DPDK 19.11.10"
external_ids        : {ovn-bridge-datapath-type=netdev, ovn-bridge-mappings="physnet1:br-ex", ovn-cms-options="enable-chassis-as-gw,availability-zones=ztj", ovn-encap-ip="123.123.45.8", ovn-encap-type=geneve, ovn-remote="tcp:100.7.50.68:6642,tcp:100.7.50.8:6642,tcp:100.7.50.10:6642", ovn-remote-probe-interval="60000", system-id="7d35377c-6fbd-5fbf-81bb-482a338e42af"}
iface_types         : [dpdk, dpdkr, dpdkvhostuser, dpdkvhostuserclient, erspan, geneve, gre, internal, ip6erspan, ip6gre, lisp, patch, stt, system, tap, vxlan]
manager_options     : []
next_cfg            : 79
other_config        : {dpdk-hugepage-dir="/dev/hugepages", dpdk-init="true", dpdk-lcore-mask="0x1000100", dpdk-socket-mem="4096,4096", emc-insert-inv-prob="1", pmd-cpu-mask="0x8000000"}
ovs_version         : []
ssl                 : []
statistics          : {}
system_type         : []
system_version      : []
```

```
(ovsdpdk-vswitchd)[root@node08 /root/zhangtongjian/openvswitch-2.13.4]$ ovs-appctl dpif-netdev/pmd-perf-show

Time: 09:17:12.930
Measurement duration: 16.401 s

pmd thread numa_id 1 core_id 27:

  Iterations:          257678  (63.50 us/it)
```

After changing pmd-cpu-mask to 0x8002000 to add a second PMD thread, the configuration became:

```
(ovsdpdk-vswitchd)[root@node08 /root/zhangtongjian/openvswitch-2.13.4]$ ovs-vsctl list open
_uuid               : 236bb95d-8911-4f58-8162-b635fb58f409
bridges             : [1f7a0c91-b705-44b2-9daf-5a087b6af484, 3e104e83-897e-4fb3-8db8-dc950d5140d0, bb0d89bf-729c-43d0-9a32-3dffb9358eb0]
cur_cfg             : 79
datapath_types      : [netdev, system]
datapaths           : {}
db_version          : []
dpdk_initialized    : true
dpdk_version        : "DPDK 19.11.10"
external_ids        : {ovn-bridge-datapath-type=netdev, ovn-bridge-mappings="physnet1:br-ex", ovn-cms-options="enable-chassis-as-gw,availability-zones=ztj", ovn-encap-ip="123.123.45.8", ovn-encap-type=geneve, ovn-remote="tcp:100.7.50.68:6642,tcp:100.7.50.8:6642,tcp:100.7.50.10:6642", ovn-remote-probe-interval="60000", system-id="7d35377c-6fbd-5fbf-81bb-482a338e42af"}
iface_types         : [dpdk, dpdkr, dpdkvhostuser, dpdkvhostuserclient, erspan, geneve, gre, internal, ip6erspan, ip6gre, lisp, patch, stt, system, tap, vxlan]
manager_options     : []
next_cfg            : 79
other_config        : {dpdk-hugepage-dir="/dev/hugepages", dpdk-init="true", dpdk-lcore-mask="0x1000100", dpdk-socket-mem="4096,4096", emc-insert-inv-prob="1", pmd-cpu-mask="0x8002000"}
ovs_version         : []
ssl                 : []
statistics          : {}
system_type         : []
system_version      : []
```

```
(ovsdpdk-vswitchd)[root@node08 /root/zhangtongjian/openvswitch-2.13.4]$ ovs-appctl dpif-netdev/pmd-perf-show

Time: 09:18:34.850
Measurement duration: 11.544 s

pmd thread numa_id 1 core_id 13:

  Iterations:         1713176  (6.72 us/it)

pmd thread numa_id 1 core_id 27:

  Iterations:          414510  (27.79 us/it)
```
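For reference, the two configurations measured above can be reproduced as follows. The `ovs-vsctl` and `ovs-appctl` commands are standard OVS, but the specific core numbers are particular to this host's NUMA layout and would need adjusting elsewhere:

```shell
# One PMD thread: mask 0x8000000 has only bit 27 set -> core 27.
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x8000000
ovs-appctl dpif-netdev/pmd-perf-show

# Two PMD threads: mask 0x8002000 has bits 13 and 27 set -> cores 13 and 27.
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x8002000
ovs-appctl dpif-netdev/pmd-rxq-show    # check how rx queues are distributed across PMDs
ovs-appctl dpif-netdev/pmd-perf-show
```

Checking `pmd-rxq-show` after adding a PMD thread is worthwhile: if all busy rx queues remain polled by one PMD, the second thread cannot contribute regardless of lock contention.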

igsilya commented 1 year ago

> When I tested the forwarding performance of floating IPs, I found that NAT traffic passing through OVS-DPDK nodes does not scale linearly as PMD threads are added; the load is merely distributed evenly across them. Are some resources mutually exclusive between the PMD threads?

Up until OVS 3.0, the connection tracking code contained large critical sections that prevented any scalability. This issue should be resolved in OVS 3.0 and newer.
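To illustrate the point about large critical sections, here is a toy sketch in plain Python (not OVS code; the real conntrack implementation is in C and far more involved). It contrasts a connection table guarded by one global lock, where every packet-processing thread serializes on the same critical section, with one guarded by per-bucket locks, where threads handling different flows usually run in parallel. All class and method names here are illustrative:

```python
import threading


class GlobalLockConntrack:
    """Pre-3.0 style: one big critical section. Every thread takes the
    same lock for every packet, so adding threads adds no throughput."""

    def __init__(self):
        self._lock = threading.Lock()
        self._table = {}

    def commit(self, five_tuple, state):
        with self._lock:            # all threads contend here
            self._table[five_tuple] = state

    def lookup(self, five_tuple):
        with self._lock:
            return self._table.get(five_tuple)


class BucketLockConntrack:
    """Finer-grained style: one lock per hash bucket. Threads handling
    different flows usually take different locks and proceed in parallel."""

    def __init__(self, nbuckets=64):
        self._locks = [threading.Lock() for _ in range(nbuckets)]
        self._buckets = [{} for _ in range(nbuckets)]

    def _bucket(self, five_tuple):
        return hash(five_tuple) % len(self._buckets)

    def commit(self, five_tuple, state):
        b = self._bucket(five_tuple)
        with self._locks[b]:        # only same-bucket flows contend
            self._buckets[b][five_tuple] = state

    def lookup(self, five_tuple):
        b = self._bucket(five_tuple)
        with self._locks[b]:
            return self._buckets[b].get(five_tuple)
```

In the first design, doubling the PMD threads roughly halves each thread's share of the same serialized work, which matches the even split of iterations seen in the measurements above rather than a doubling of throughput.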

BigCousin-z commented 1 year ago

Great! Can you point out which commit solved this problem? I can't find it.

BigCousin-z commented 1 year ago

> > When I tested the forwarding performance of floating IPs, I found that NAT traffic passing through OVS-DPDK nodes does not scale linearly as PMD threads are added; the load is merely distributed evenly across them. Are some resources mutually exclusive between the PMD threads?
>
> Up until OVS 3.0, the connection tracking code contained large critical sections that prevented any scalability. This issue should be resolved in OVS 3.0 and newer.

Great! Can you point out which commit solved this problem? I can't find it.

igsilya commented 1 year ago

It wasn't a single patch, but a patch set, namely this one: https://patchwork.ozlabs.org/project/openvswitch/cover/165755848353.777605.11624470557538344560.stgit@fed.void/