@italovalcy thanks for looking into this. Yes, that was an unexpected side effect.
I'll update the function to set the controller to an unreachable port, as suggested in that discussion. I explored it locally to confirm, and indeed it behaved as expected, only forcing the handshake again:
❯ sudo ovs-ofctl -O OpenFlow13 dump-flows s1
cookie=0xac00000000000001, duration=104.822s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=50000,dl_src=ee:ee:ee:ee:ee:02 actions=CONTROLLER:65535
cookie=0xac00000000000001, duration=104.821s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=50000,dl_src=ee:ee:ee:ee:ee:03 actions=CONTROLLER:65535
cookie=0xaaa5949e826a664e, duration=153.887s, table=0, n_packets=4, n_bytes=280, send_flow_rem priority=5000,in_port="s1-eth1" actions=push_vlan:0x88a8,set_field:6157->vlan_vid,output:"s1-eth4"
cookie=0xaaa5949e826a664e, duration=153.883s, table=0, n_packets=4, n_bytes=296, send_flow_rem priority=5000,in_port="s1-eth4",dl_vlan=2061 actions=pop_vlan,output:"s1-eth1"
cookie=0xaaa5949e826a664e, duration=153.740s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=5000,in_port="s1-eth2",dl_vlan=3738 actions=pop_vlan,output:"s1-eth1"
cookie=0xab00000000000001, duration=83.109s, table=0, n_packets=110, n_bytes=4620, send_flow_rem priority=1000,dl_vlan=3799,dl_type=0x88cc actions=CONTROLLER:65535
kytos $> 2022-12-02 10:53:49,095 - INFO [kytos.core.atcp_server] (MainThread) Connection lost with client 127.0.0.1:40012. Reason: Request closed by client
2022-12-02 10:53:49,116 - INFO [kytos.core.atcp_server] (MainThread) New connection from 127.0.0.1:60880
2022-12-02 10:53:49,140 - INFO [kytos.napps.kytos/of_core] (thread_pool_sb_3) Connection ('127.0.0.1', 60880), Switch 00:00:00:00:00:00:00:01: OPENFLOW HANDSHAKE COMPLETE
❯ sudo ovs-vsctl set-controller s1 tcp:127.0.0.1:6666; sudo ovs-vsctl set-controller s1 tcp:127.0.0.1:6653; sudo ovs-ofctl dump-flows s1 -O OpenFlow13
cookie=0xac00000000000001, duration=108.764s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=50000,dl_src=ee:ee:ee:ee:ee:02 actions=CONTROLLER:65535
cookie=0xac00000000000001, duration=108.763s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=50000,dl_src=ee:ee:ee:ee:ee:03 actions=CONTROLLER:65535
cookie=0xaaa5949e826a664e, duration=157.829s, table=0, n_packets=4, n_bytes=280, send_flow_rem priority=5000,in_port="s1-eth1" actions=push_vlan:0x88a8,set_field:6157->vlan_vid,output:"s1-eth4"
cookie=0xaaa5949e826a664e, duration=157.825s, table=0, n_packets=4, n_bytes=296, send_flow_rem priority=5000,in_port="s1-eth4",dl_vlan=2061 actions=pop_vlan,output:"s1-eth1"
cookie=0xaaa5949e826a664e, duration=157.682s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=5000,in_port="s1-eth2",dl_vlan=3738 actions=pop_vlan,output:"s1-eth1"
cookie=0xab00000000000001, duration=87.051s, table=0, n_packets=112, n_bytes=4704, send_flow_rem priority=1000,dl_vlan=3799,dl_type=0x88cc actions=CONTROLLER:65535
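For reference, a minimal sketch of what the updated helper could look like, assuming it shells out to ovs-vsctl the same way as the commands above; the function name, bridge name, and addresses are illustrative, not the actual kytos code:

import subprocess

UNREACHABLE = "tcp:127.0.0.1:6666"  # assumed port with no listener behind it
CONTROLLER = "tcp:127.0.0.1:6653"   # assumed kytos controller address

def reconnect_switch(bridge: str) -> None:
    """Force a new OpenFlow handshake without touching the flow table.

    Pointing the bridge at an unreachable controller drops the current
    connection; pointing it back re-establishes it, so OVS only redoes
    the handshake instead of clearing the installed flows.
    """
    subprocess.run(["ovs-vsctl", "set-controller", bridge, UNREACHABLE],
                   check=True)
    subprocess.run(["ovs-vsctl", "set-controller", bridge, CONTROLLER],
                   check=True)

reconnect_switch("s1")

This mirrors the one-liner above (set-controller to :6666, then back to :6653), after which dump-flows still shows the flows with their durations intact.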
Hi,
I was troubleshooting the issues that occurred in the e2e tests below:
One of the reasons why the error above happened is that, once we run the reconnect_switches function, all flows are removed.
See this discussion here: https://mail.openvswitch.org/pipermail/ovs-discuss/2014-May/033712.html
Proof of concept:
The sleep shows that, after some time, the flows are recreated (due to the consistency check). The point is that, right after changing the controller, the flows are removed.
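A rough sketch of that proof of concept (the exact commands reconnect_switches runs are not shown here, so the del-controller/set-controller toggle below is an assumption; the bridge name, controller address, and sleep interval are illustrative):

import subprocess
import time

BRIDGE = "s1"                       # hypothetical Mininet bridge
CONTROLLER = "tcp:127.0.0.1:6653"   # assumed kytos controller address

def dump_flows():
    # Print the bridge's current flow table.
    subprocess.run(["ovs-ofctl", "-O", "OpenFlow13", "dump-flows", BRIDGE],
                   check=True)

# Changing the controller setting clears the flow table right away.
subprocess.run(["ovs-vsctl", "del-controller", BRIDGE], check=True)
subprocess.run(["ovs-vsctl", "set-controller", BRIDGE, CONTROLLER], check=True)
dump_flows()    # flows are gone at this point

time.sleep(15)  # give the controller's consistency check time to run
dump_flows()    # flows have been recreated by the consistency check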
I believe the expected behavior for reconnect_switches() is just to reset the connection to the controller (to force the consistency check to run). Removing the flows seems to be unexpected behavior.