kytos-ng / of_core

Kytos Main OpenFlow Network Application (NApp)
MIT License

Potential issue when parsing/deparsing dl_vlan mask 4095 (all 1's) #125

Open viniarck opened 9 months ago

viniarck commented 9 months ago

If you were to install these two flows on OvS directly, 12/4095 and 14/4094, notice below that it deparses the Flow runtime structure as match: {'dl_vlan': 12}, which isn't necessarily wrong and can be thought of as an optimization, but it will result in consistency check issues when generating the flow.id. Comparing with the dumped output of OvS, even an all-1s mask is kept there, so maybe our serialization should be explicit too:

sudo ovs-ofctl add-flow s1 table=0,vlan_tci=12/0x1fff,actions=output:"s1-eth4" -O OpenFlow13
sudo ovs-ofctl add-flow s1 table=0,vlan_tci=14/0x1ffe,actions=output:"s1-eth4" -O OpenFlow13
 [{'switch': '00:00:00:00:00:00:00:01',
  'table_id': 0,
  'match': {'dl_vlan': 12},
  'priority': 32768,
  'idle_timeout': 0,
  'hard_timeout': 0,
  'cookie': 0,
  'id': '73582035b74b4b4fbc386829246709f2',
  'stats': {'byte_count': 0,
   'duration_sec': 47,
   'duration_nsec': 362000000,
   'packet_count': 0},
  'cookie_mask': 0,
  'instructions': [{'instruction_type': 'apply_actions',
    'actions': [{'port': 4, 'action_type': 'output'}]}]},
 {'switch': '00:00:00:00:00:00:00:01',
  'table_id': 0,
  'match': {'dl_vlan': '14/4094'},
  'priority': 32768,
  'idle_timeout': 0,
  'hard_timeout': 0,
  'cookie': 0,
  'id': 'bc62b8d8fc20000156553801f9cdc672',
  'stats': {'byte_count': 0,
   'duration_sec': 19,
   'duration_nsec': 878000000,
   'packet_count': 0},
  'cookie_mask': 0,
  'instructions': [{'instruction_type': 'apply_actions',
    'actions': [{'port': 4, 'action_type': 'output'}]}]}]
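
Side note on what these masks mean (a quick sanity check in Python, not of_core code): 4095 keeps every VID bit, so 12/4095 is effectively an exact match on VLAN 12, which is why it can be collapsed to dl_vlan: 12, while 4094 ignores the lowest bit, so 14/4094 covers VLANs 14 and 15 and the mask has to be kept:

    # 4095 = 0x0fff keeps all 12 VID bits -> only VLAN 12 matches 12/4095.
    assert [vid for vid in range(4096) if vid & 4095 == 12] == [12]
    # 4094 = 0x0ffe ignores the lowest VID bit -> VLANs 14 and 15 both match 14/4094.
    assert [vid for vid in range(4096) if vid & 4094 == 14] == [14, 15]
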
❯ sudo ovs-ofctl -O OpenFlow13 dump-flows s1
 cookie=0xab00000000000001, duration=358.759s, table=0, n_packets=714, n_bytes=29988, send_flow_rem priority=50000,dl_vlan=3799,dl_type=0x88cc actions=CONTROLLER:65535
 cookie=0xac00000000000001, duration=302.925s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=50000,dl_src=ee:ee:ee:ee:ee:02 actions=CONTROLLER:65535
 cookie=0xac00000000000001, duration=302.924s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=50000,dl_src=ee:ee:ee:ee:ee:03 actions=CONTROLLER:65535
 cookie=0x0, duration=349.364s, table=0, n_packets=0, n_bytes=0, vlan_tci=0x000c/0x1fff actions=output:"s1-eth4"
 cookie=0x0, duration=321.880s, table=0, n_packets=0, n_bytes=0, vlan_tci=0x000e/0x1ffe actions=output:"s1-eth4"
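
Since flow.id is derived from the serialized flow (match included), the two representations above end up hashing to different ids even though they match exactly the same packets. A minimal sketch of the mismatch (hypothetical hashing, not the actual of_core code):

    import hashlib
    import json

    def fake_flow_id(match: dict) -> str:
        """Hypothetical id: hash the serialized match, analogous to how the flow id is derived."""
        return hashlib.md5(json.dumps(match, sort_keys=True).encode()).hexdigest()

    requested = {"dl_vlan": "12/4095"}  # what we asked the switch to install
    reported = {"dl_vlan": 12}          # what OvS gives back in the stats reply
    assert fake_flow_id(requested) != fake_flow_id(reported)  # consistency check would flag this
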

I'm also going to compare with NoviFlow switches for completeness, cc'ing @Alopalao. Aldo has encountered this when implementing VLAN ranges on https://github.com/kytos-ng/mef_eline/pull/396/files#r1384185684

viniarck commented 9 months ago

Looks like it's an optimization implementation detail on OvS. I've explored the following experiments, pushing this flow with "dl_vlan": "10/4095":

❯ curl -X POST http://127.0.0.1:8181/api/kytos/flow_manager/v2/flows/00:00:00:00:00:00:00:01 -H 'Content-type: application/json' -d '{"flows": [{"priority": 100, "match": {"in_port": 1, "dl_vlan": "10/4095"}}]}'

1 - On NoviFlow, it didn't optimize away the mask:

    [FLOW_ID1]
        Timestamp        = Tue Nov  7 17:14:14 2023
        TableId          = 0
        ofp_version      = 4
        ControllerGroup  = vinicius-ctrl
        ControllerId     = 1
        Priority         = 100
        Idle_timeout     = 0
        Hard_timeout     = 0
        Packet_count     = 0
        Byte_count       = 0
        Cookie           = 0
        Send_flow_rem    = true
        Persistent       = false
        [MATCHFIELDS]
            OFPXMT_OFB_IN_PORT = 1
            OFPXMT_OFB_VLAN_VID = 10(OFPVID_PRESENT=1) & 4095
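
For reference, the VLAN_VID line above follows the OpenFlow 1.3 OXM encoding, where OFPVID_PRESENT (0x1000) is OR'ed into the value to indicate a tagged packet. A small decoding sketch just to illustrate the spec constant (not of_core's actual parser):

    OFPVID_PRESENT = 0x1000  # OpenFlow 1.3: set when a VLAN tag is present

    def decode_vlan_vid(value: int) -> tuple[bool, int]:
        """Split an OXM_OF_VLAN_VID value into (tag_present, vid)."""
        return bool(value & OFPVID_PRESENT), value & 0x0FFF

    # Matches the NoviFlow output: value 10 with OFPVID_PRESENT=1, masked by 4095.
    assert decode_vlan_vid(OFPVID_PRESENT | 10) == (True, 10)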

Also, on the wire, both the FlowMod and the FlowStats reply were as expected:

(screenshots: 20231107_141509, 20231107_141459)

2 - On OvS 3.2.0, it optimized away the mask in the flow stats reply, which arguably makes sense since it's an exact match at the end of the day, but this leads to a different flow.id since we compute the id based on the serialization of the mask:

(screenshots: 20231107_140853, 20231107_140729)

Conclusion

When using an all-1s (exact match) mask, for now the safest approach is to omit the mask when sending the flow. Otherwise, to support platform differences, we'd need extra conditionals when computing flow ids to exclude such masks from the id hash calculation, which wouldn't be that trivial either. On NoviFlow, both options work as expected though.
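
As an illustration of that approach, a hypothetical normalization step (names made up, not of_core's API) that drops a redundant all-1s mask before the flow is serialized/hashed:

    def normalize_dl_vlan(dl_vlan):
        """Drop a redundant all-1s mask, e.g. '12/4095' -> 12, but keep '14/4094' as-is."""
        if isinstance(dl_vlan, str) and "/" in dl_vlan:
            value, mask = dl_vlan.split("/")
            if int(mask) == 4095:
                return int(value)
        return dl_vlan

    assert normalize_dl_vlan("12/4095") == 12
    assert normalize_dl_vlan("14/4094") == "14/4094"
    assert normalize_dl_vlan(12) == 12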

cc'ing @Alopalao @italovalcy for their information

Side notes

Interestingly enough, on OvS, when setting vlan_tci via its CLI it shows an explicit mask in dump-flows, but it doesn't return it in the multipart reply:

sudo ovs-ofctl add-flow s1 table=0,vlan_tci=12/0x1fff,actions=output:"s1-eth4" -O OpenFlow13
❯ sudo ovs-ofctl -O OpenFlow13 dump-flows s1
 cookie=0xab00000000000001, duration=223.489s, table=0, n_packets=414, n_bytes=17388, send_flow_rem priority=50000,dl_vlan=3799,dl_type=0x88cc actions=CONTROLLER:65535
 cookie=0xac00000000000001, duration=99.872s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=50000,dl_src=ee:ee:ee:ee:ee:02 actions=CONTROLLER:65535
 cookie=0xac00000000000001, duration=99.871s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=50000,dl_src=ee:ee:ee:ee:ee:03 actions=CONTROLLER:65535
 cookie=0x0, duration=181.767s, table=0, n_packets=0, n_bytes=0, vlan_tci=0x000c/0x1fff actions=output:"s1-eth4"
 cookie=0x0, duration=4.094s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=100,in_port="s1-eth1",dl_vlan=10 actions=drop

(screenshots: 20231107_143243, 20231107_143302)