viniarck opened 9 months ago
Looks like it's an optimization implementation detail on OvS. I've explored the following experiments, pushing this flow with `"dl_vlan": "10/4095"`:

```bash
❯ curl -X POST http://127.0.0.1:8181/api/kytos/flow_manager/v2/flows/00:00:00:00:00:00:00:01 -H 'Content-type: application/json' -d '{"flows": [{"priority": 100, "match": {"in_port": 1, "dl_vlan": "10/4095"}}]}'
```
1 - On NoviFlow, it didn't optimize away the mask:
```
[FLOW_ID1]
Timestamp = Tue Nov 7 17:14:14 2023
TableId = 0
ofp_version = 4
ControllerGroup = vinicius-ctrl
ControllerId = 1
Priority = 100
Idle_timeout = 0
Hard_timeout = 0
Packet_count = 0
Byte_count = 0
Cookie = 0
Send_flow_rem = true
Persistent = false
[MATCHFIELDS]
OFPXMT_OFB_IN_PORT = 1
OFPXMT_OFB_VLAN_VID = 10(OFPVID_PRESENT=1) & 4095
```
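For reference, a minimal sketch of the bit arithmetic behind that `VLAN_VID` line (`OFPVID_PRESENT` is the OpenFlow 1.3 constant; the rest of the names are illustrative):

```python
# OpenFlow 1.3: this bit is set in OXM_OF_VLAN_VID when a VLAN tag is present.
OFPVID_PRESENT = 0x1000

vlan_vid = 10 | OFPVID_PRESENT  # 0x100a: VID 10 on a tagged frame
mask = 4095                     # 0x0fff: all 12 VID bits set, an exact match
print(hex(vlan_vid), hex(mask))  # 0x100a 0xfff
```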
Also, on the wire, both the FlowMod and the FlowStats reply were as expected.
2 - On OvS 3.2.0, it optimized away the mask on the flow stats reply, which arguably makes sense since it's an exact match at the end of the day, but this leads to a different `flow.id`, since we compute the id based on the serialization of the mask.
When using an all-1s (exact match) mask, for now the safest approach is to omit it when sending the flow. Otherwise, to support platform differences when computing flow ids, we'd need extra conditionals to exclude those masks from the id hash calculation, which wouldn't be that trivial either. On NoviFlow, both options work as expected, though.
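To make the "omit the all-1s mask" approach concrete, here's a minimal sketch, assuming the id is derived from a JSON serialization of the match (`normalize_dl_vlan` and the md5 step are illustrative, not flow_manager's actual implementation):

```python
import hashlib
import json

VLAN_EXACT_MASK = 4095  # all 12 VID bits set: "10/4095" is effectively exact

def normalize_dl_vlan(match: dict) -> dict:
    """Drop an all-1s dl_vlan mask so "10/4095" and plain 10 hash identically."""
    match = dict(match)
    dl_vlan = match.get("dl_vlan")
    if isinstance(dl_vlan, str) and "/" in dl_vlan:
        vid, mask = dl_vlan.split("/")
        if int(mask) == VLAN_EXACT_MASK:
            match["dl_vlan"] = int(vid)  # exact match: omit the mask
    return match

def flow_id(match: dict) -> str:
    """Illustrative id: hash the serialized, normalized match."""
    payload = json.dumps(normalize_dl_vlan(match), sort_keys=True)
    return hashlib.md5(payload.encode()).hexdigest()

# The controller-side flow and the one OvS reports back (mask stripped)
# now produce the same id:
assert flow_id({"in_port": 1, "dl_vlan": "10/4095"}) == \
       flow_id({"in_port": 1, "dl_vlan": 10})
```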
cc'ing @Alopalao @italovalcy for their information
Interestingly enough, on OvS, when setting `vlan_tci` via its CLI, it shows an explicit mask, but it doesn't return it on the multipart reply:
```
❯ sudo ovs-ofctl add-flow s1 table=0,vlan_tci=12/0x1fff,actions=output:"s1-eth4" -O OpenFlow13
❯ sudo ovs-ofctl -O OpenFlow13 dump-flows s1
cookie=0xab00000000000001, duration=223.489s, table=0, n_packets=414, n_bytes=17388, send_flow_rem priority=50000,dl_vlan=3799,dl_type=0x88cc actions=CONTROLLER:65535
cookie=0xac00000000000001, duration=99.872s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=50000,dl_src=ee:ee:ee:ee:ee:02 actions=CONTROLLER:65535
cookie=0xac00000000000001, duration=99.871s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=50000,dl_src=ee:ee:ee:ee:ee:03 actions=CONTROLLER:65535
cookie=0x0, duration=181.767s, table=0, n_packets=0, n_bytes=0, vlan_tci=0x000c/0x1fff actions=output:"s1-eth4"
cookie=0x0, duration=4.094s, table=0, n_packets=0, n_bytes=0, send_flow_rem priority=100,in_port="s1-eth1",dl_vlan=10 actions=drop
```
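As an aside, a sketch of how the `0x1fff` mask above decomposes, following the 802.1Q TCI bit layout (the constant names are illustrative):

```python
VID_MASK = 0x0fff  # bits 0-11: VLAN ID
CFI_MASK = 0x1000  # bit 12: CFI, which OvS also uses as the "tag present" bit
PCP_MASK = 0xe000  # bits 13-15: priority, left wild by 0x1fff

assert VID_MASK | CFI_MASK == 0x1fff
assert 0x000c & VID_MASK == 12  # the vlan_tci=12 value above is VID 12
```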
If you were to install these two flows on OvS directly, `12/4095` and `14/4094`, notice below that it's deparsing the `Flow` runtime structure as `match: {'dl_vlan': 12}`, which isn't necessarily wrong and can be thought of as an optimization, but it will result in consistency check issues when generating the `flow.id`. Comparing with OvS's own dumped output, it keeps even an all-1s mask, so maybe our serialization should be explicit too. I'm also going to compare with NoviFlow switches for completeness, cc'ing @Alopalao. Aldo has encountered this when implementing vlan range on https://github.com/kytos-ng/mef_eline/pull/396/files#r1384185684
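Reusing the illustrative hashing from the sketch above, here's the resulting divergence: without mask normalization, the flow stored by the controller and the one deparsed from OvS's stats reply hash to different ids (the dicts below are illustrative serializations):

```python
import hashlib
import json

def raw_id(match: dict) -> str:
    # Hash the match as-is, with no mask normalization.
    return hashlib.md5(json.dumps(match, sort_keys=True).encode()).hexdigest()

stored = {"in_port": 1, "dl_vlan": "12/4095"}  # as pushed by the controller
reported = {"in_port": 1, "dl_vlan": 12}       # as deparsed from OvS's reply

assert raw_id(stored) != raw_id(reported)  # the consistency check would flag this
```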