djlwilder opened 2 months ago
```diff
--- configure-ovs.sh	2024-09-20 15:29:03.160536239 -0700
+++ configure-ovs.sh.patched	2024-09-20 15:33:38.040336032 -0700
@@ -575,8 +575,8 @@
     # But set the entry in master_interfaces to true if this is a slave
     # Also set autoconnect to yes
     local active_state=$(nmcli -g GENERAL.STATE conn show "$conn")
-    if [ "$active_state" == "activated" ]; then
-      echo "Connection $conn already activated"
+    if [ "$active_state" == "activated" ] || [ "$active_state" == "activating" ]; then
+      echo "Connection $conn already activated or activating"
     if $is_slave; then
       master_interfaces[$master_interface]=true
     fi
```
The attached kernel traces show the NetworkManager interaction with the bonding driver when configure-ovs.sh is run. bonding-trace-fixed.txt was captured with the patch installed. bond-trace-broken.txt shows how the MACs of the slaves are left set to the same value.
Bonded network configurations with mode=active-backup and fail_over_mac=follow are not functioning due to a race in /var/usrlocal/bin/configure-ovs.sh.
Steps to reproduce. NetworkManager profiles (/etc/NetworkManager/system-connections):
```
$ cat bond0.nmconnection
[connection]
id=bond0
type=bond
autoconnect-priority=-100
autoconnect-retries=1
interface-name=bond0
multi-connect=1

[bond]
fail_over_mac=follow
mode=active-backup

[ipv4]
method=manual
address=192.168.42.6/24,192.168.42.1
dns=192.168.42.1

[ipv6]
dhcp-timeout=90
method=auto

$ cat enP32807p1s0.nmconnection
[connection]
id=enP32807p1s0
type=ethernet
autoconnect-priority=-100
autoconnect-retries=1
interface-name=enP32807p1s0
master=bond0
multi-connect=1
slave-type=bond
wait-device-timeout=60000

$ cat enP32807p1s0.nmconnection.backup
[connection]
id=enP32807p1s0
type=ethernet
autoconnect-priority=-100
autoconnect-retries=1
interface-name=enP32807p1s0
master=bond0
multi-connect=1
slave-type=bond
wait-device-timeout=60000
```
When the node is booted, during initial start-up (before ovs-configuration.service has run), the bonded configuration works fine.
```
$ ip a s
2: enP32807p1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 32:f4:c4:ec:23:00 brd ff:ff:ff:ff:ff:ff
3: enP49154p1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 32:f4:ca:4e:53:01 brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:f4:c4:ec:23:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.42.6/24 brd 192.168.42.255 scope global noprefixroute bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::30f4:c4ff:feec:2300/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
......
```
However, after ovs-configuration.service has run, the network is no longer functioning.
```
2: enP32807p1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 32:f4:c4:ec:23:00 brd ff:ff:ff:ff:ff:ff
3: enP49154p1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 32:f4:c4:ec:23:00 brd ff:ff:ff:ff:ff:ff permaddr 32:f4:ca:4e:53:01
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether 32:f4:c4:ec:23:00 brd ff:ff:ff:ff:ff:ff
9: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 32:f4:c4:ec:23:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.42.6/24 brd 192.168.42.255 scope global noprefixroute br-ex
       valid_lft forever preferred_lft forever
    inet6 fe80::30f4:c4ff:feec:2300/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
```
At this point the MACs of the bond's slaves (enP32807p1s0, enP49154p1s0) are the same. The purpose of fail_over_mac=follow is to ensure the MACs will not be the same, so this prevents the bond from functioning. This initially appeared to be a problem with the bonding driver, but after tracing the calls NetworkManager makes to the bonding driver I discovered the root of the problem is in configure-ovs.sh.
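The broken state is easy to detect from the slave MACs alone. Below is a minimal sketch; `check_slave_macs` is a hypothetical helper, and on a live node the two values would come from `/sys/class/net/<iface>/address` for each slave:

```shell
# Hypothetical helper: with fail_over_mac=follow, the active-backup slaves
# must carry distinct MACs, so identical values indicate the broken state.
check_slave_macs() {
  local mac1=$1 mac2=$2
  if [ "$mac1" == "$mac2" ]; then
    echo "BROKEN: slaves share MAC $mac1"
  else
    echo "OK: slave MACs differ"
  fi
}

# Values from the broken output above: both slaves ended up with the same MAC.
check_slave_macs 32:f4:c4:ec:23:00 32:f4:c4:ec:23:00
# The healthy state seen before ovs-configuration.service ran:
check_slave_macs 32:f4:c4:ec:23:00 32:f4:ca:4e:53:01
```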
The function activate_nm_connections() attempts to activate all of its generated profiles that are not currently in the "activated" state. In my case the following profiles are activated one at a time, in this order: br-ex, ovs-if-phys0, enP32807p1s0-slave-ovs-clone, enP49154p1s0-slave-ovs-clone, ovs-if-br-ex.
However, the generated profiles have autoconnect-slaves set, so when br-ex is activated, the state of ovs-if-phys0, enP32807p1s0-slave-ovs-clone, and enP49154p1s0-slave-ovs-clone changes to "activating". Because the script only checks for the "activated" state, these profiles may be activated a second time. As the list is walked, some of the profiles' states automatically transition from "activating" to "activated"; those interfaces are not activated a second time, leaving the bond in an unpredictable state. The bonding traces show why both slave interfaces end up with the same MAC.
My fix is to check for either the "activating" or the "activated" state.
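The effect of the patch can be isolated in a small helper. This is a sketch; `needs_activation` is a hypothetical name, and the real script inlines this test inside activate_nm_connections():

```shell
# Hypothetical helper mirroring the patched condition: a profile only needs
# an explicit "nmcli conn up" when it is neither activated nor activating.
needs_activation() {
  local active_state=$1
  if [ "$active_state" == "activated" ] || [ "$active_state" == "activating" ]; then
    return 1   # skip: NetworkManager already has it up (or is bringing it up)
  fi
  return 0     # activate explicitly
}

# A profile pulled up by autoconnect-slaves reports "activating" and is skipped:
needs_activation "activating" || echo "skip: autoconnect already in progress"
```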
Additional environment details (platform, options, etc.):
Environment: IBM Power-VM
Kernel: 5.14.0-284.82.1.el9_2.ppc64le
oc version
Network interface: Mellanox Technologies ConnectX Family mlx5Gen Virtual Functions (SR-IOV).