kytos-ng / pathfinder

Kytos main path finder Network Application (NApp)
https://kytos-ng.github.io/api/pathfinder.html
MIT License

link.metadata deletions can leave the graph metadata with old values #54

Closed · viniarck closed this issue 1 year ago

viniarck commented 1 year ago

link.metadata deletions can leave the graph metadata with old values until the next kytos.topology.updated event is published. I ended up hitting this while analyzing and trying to reproduce issue #53. It turns out that self.graph.update_link_metadata(link) only sets values, since it was originally meant to be called right after the graph got cleared, which always happens when processing a kytos.topology.updated event. With kytos/topology.links.metadata.(added|removed), however, when a metadata key gets removed the event only carries the remaining (current) metadata, so stale metadata keys are left behind in the graph edges until the next kytos.topology.updated is received and the graph is rebuilt from scratch.
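To make the mechanism concrete, here's a minimal standalone sketch (plain networkx, not pathfinder's actual code; the helper name just mirrors the "only sets values" behavior described above) showing why stale keys survive a metadata deletion:

```python
import networkx as nx

graph = nx.Graph()
graph.add_edge("00:00:00:00:00:00:00:01:4", "00:00:00:00:00:00:00:03:3")

def update_link_metadata(graph, u, v, metadata):
    """Only set the given keys; keys absent from `metadata` are never removed."""
    for key, value in metadata.items():
        graph[u][v][key] = value

# Initial link metadata
update_link_metadata(graph, "00:00:00:00:00:00:00:01:4",
                     "00:00:00:00:00:00:00:03:3",
                     {"bandwidth": 100, "ownership": "red"})

# "bandwidth" gets deleted on topology; the metadata.removed event only
# carries the remaining metadata, so the stale key stays on the edge:
update_link_metadata(graph, "00:00:00:00:00:00:00:01:4",
                     "00:00:00:00:00:00:00:03:3",
                     {"ownership": "red"})

print(graph["00:00:00:00:00:00:00:01:4"]["00:00:00:00:00:00:00:03:3"])
# -> {'bandwidth': 100, 'ownership': 'red'}  ("bandwidth" should be gone)
```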

Here's an example of deleting the "bandwidth" link metadata; on the graph edge it's still there:

kytos $> 2023-07-06 14:37:32,502 - INFO [uvicorn.access] [httptools_impl.py:506:send] (MainThread) 127.0.0.1:47700 - "DELETE /api/kytos/topology/v3/links/c8b55359990f89a5849813dc348d30e9e1f991bad1dcb7f82112bd35429d9b07/metadata/bandwidth HTTP/1.1" 200
kytos $>

kytos $> for u, v in controller.napps[('kytos','pathfinder')].graph.graph.edges:
    ...:     print(f"u: {u}, v: {v}, {controller.napps[('kytos', 'pathfinder')].graph.graph[u][v]}")
    ...:
u: 00:00:00:00:00:00:00:01, v: 00:00:00:00:00:00:00:01:4294967294, {}
u: 00:00:00:00:00:00:00:01, v: 00:00:00:00:00:00:00:01:4, {}
u: 00:00:00:00:00:00:00:01, v: 00:00:00:00:00:00:00:01:1, {}
u: 00:00:00:00:00:00:00:01, v: 00:00:00:00:00:00:00:01:2, {}
u: 00:00:00:00:00:00:00:01, v: 00:00:00:00:00:00:00:01:3, {}
u: 00:00:00:00:00:00:00:01:4, v: 00:00:00:00:00:00:00:03:3, {'bandwidth': 100, 'ownership': 'red'}
u: 00:00:00:00:00:00:00:01:3, v: 00:00:00:00:00:00:00:02:2, {}
u: 00:00:00:00:00:00:00:03, v: 00:00:00:00:00:00:00:03:4294967294, {}
u: 00:00:00:00:00:00:00:03, v: 00:00:00:00:00:00:00:03:1, {}
u: 00:00:00:00:00:00:00:03, v: 00:00:00:00:00:00:00:03:2, {}
u: 00:00:00:00:00:00:00:03, v: 00:00:00:00:00:00:00:03:3, {}
u: 00:00:00:00:00:00:00:03:2, v: 00:00:00:00:00:00:00:02:3, {}
u: 00:00:00:00:00:00:00:02, v: 00:00:00:00:00:00:00:02:4294967294, {}
u: 00:00:00:00:00:00:00:02, v: 00:00:00:00:00:00:00:02:1, {}
u: 00:00:00:00:00:00:00:02, v: 00:00:00:00:00:00:00:02:2, {}
u: 00:00:00:00:00:00:00:02, v: 00:00:00:00:00:00:00:02:3, {}

To keep this simpler to maintain, and to minimize potential race conditions and locks, it would be more appropriate to only keep kytos.topology.updated, and then whenever a link metadata changes on topology it would also publish a kytos.topology.updated, since link metadata is meaningful: it can represent a logical metric that's relevant to pathfinder. We can still keep publishing kytos/topology.links.metadata.(added|removed) on topology to avoid breaking compatibility, though.
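A minimal sketch of that idea on the topology side (KytosEvent and the app buffer are standard Kytos primitives; the function name, parameters, and event content shape here are illustrative assumptions, not topology's actual code):

```python
from kytos.core.events import KytosEvent

def notify_link_metadata_changed(controller, topology, link, action):
    """action is "added" or "removed"."""
    # Keep publishing the existing event for compatibility.
    controller.buffers.app.put(KytosEvent(
        name=f"kytos/topology.links.metadata.{action}",
        content={"link": link, "metadata": link.metadata},
    ))
    # Also publish the full topology-updated event, which pathfinder would
    # keep as its single source of truth and use to rebuild the graph.
    controller.buffers.app.put(KytosEvent(
        name="kytos/topology.updated",
        content={"topology": topology},
    ))
```

For reference, the log below shows the link going down and back up; after the resulting topology update rebuilds the graph from scratch, the stale "bandwidth" key is gone: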

kytos $> 2023-07-06 14:41:01,397 - INFO [kytos.napps.kytos/of_core] [main.py:708:update_port_status] (MainThread) PortStatus modified interface 00:00:00:00:00:00:00:01:4 state 0
2023-07-06 14:41:01,397 - INFO [kytos.napps.kytos/of_core] [main.py:708:update_port_status] (MainThread) PortStatus modified interface 00:00:00:00:00:00:00:01:4 state OFPPS_LINK_DOWN
2023-07-06 14:41:01,398 - INFO [kytos.napps.kytos/of_core] [main.py:708:update_port_status] (MainThread) PortStatus modified interface 00:00:00:00:00:00:00:03:3 state 0
2023-07-06 14:41:01,399 - INFO [kytos.napps.kytos/of_core] [main.py:708:update_port_status] (MainThread) PortStatus modified interface 00:00:00:00:00:00:00:03:3 state OFPPS_LINK_DOWN
2023-07-06 14:41:01,401 - INFO [kytos.napps.kytos/mef_eline] [main.py:721:handle_link_down] (thread_pool_app_6) Event handle_link_down Link(Interface('s1-eth4', 4, Switch('00:00:00:00:00:00:00:01')), Interface('s3-eth3', 3, Switch('00:00:00:00:00:00:00:03')), c8b55359990f89a5849813dc348d30e9e1f991bad1dcb7f82112bd35429d9b07)
2023-07-06 14:41:02,599 - INFO [kytos.napps.kytos/of_core] [main.py:708:update_port_status] (MainThread) PortStatus modified interface 00:00:00:00:00:00:00:01:4 state OFPPS_LIVE
2023-07-06 14:41:02,601 - INFO [kytos.napps.kytos/of_core] [main.py:708:update_port_status] (MainThread) PortStatus modified interface 00:00:00:00:00:00:00:03:3 state OFPPS_LIVE
2023-07-06 14:41:04,610 - INFO [kytos.napps.kytos/mef_eline] [main.py:707:handle_link_up] (thread_pool_app_6) Event handle_link_up Link(Interface('s1-eth4', 4, Switch('00:00:00:00:00:00:00:01')), Interface('s3-eth3', 3, Switch('00:00:00:00:00:00:00:03')), c8b55359990f89a5849813dc348d30e9e1f991bad1dcb7f82112bd35429d9b07)
kytos $>

kytos $> for u, v in controller.napps[('kytos','pathfinder')].graph.graph.edges:
    ...:     print(f"u: {u}, v: {v}, {controller.napps[('kytos', 'pathfinder')].graph.graph[u][v]}")
    ...:
u: 00:00:00:00:00:00:00:01, v: 00:00:00:00:00:00:00:01:4294967294, {}
u: 00:00:00:00:00:00:00:01, v: 00:00:00:00:00:00:00:01:4, {}
u: 00:00:00:00:00:00:00:01, v: 00:00:00:00:00:00:00:01:1, {}
u: 00:00:00:00:00:00:00:01, v: 00:00:00:00:00:00:00:01:2, {}
u: 00:00:00:00:00:00:00:01, v: 00:00:00:00:00:00:00:01:3, {}
u: 00:00:00:00:00:00:00:01:4, v: 00:00:00:00:00:00:00:03:3, {'ownership': 'red'}
u: 00:00:00:00:00:00:00:01:3, v: 00:00:00:00:00:00:00:02:2, {}
u: 00:00:00:00:00:00:00:03, v: 00:00:00:00:00:00:00:03:4294967294, {}
u: 00:00:00:00:00:00:00:03, v: 00:00:00:00:00:00:00:03:1, {}
u: 00:00:00:00:00:00:00:03, v: 00:00:00:00:00:00:00:03:2, {}
u: 00:00:00:00:00:00:00:03, v: 00:00:00:00:00:00:00:03:3, {}
u: 00:00:00:00:00:00:00:03:2, v: 00:00:00:00:00:00:00:02:3, {}
u: 00:00:00:00:00:00:00:02, v: 00:00:00:00:00:00:00:02:4294967294, {}
u: 00:00:00:00:00:00:00:02, v: 00:00:00:00:00:00:00:02:1, {}
u: 00:00:00:00:00:00:00:02, v: 00:00:00:00:00:00:00:02:2, {}
u: 00:00:00:00:00:00:00:02, v: 00:00:00:00:00:00:00:02:3, {}

Another benefit of using only kytos.topology.updated on pathfinder is that we no longer need the reconciliation topology event in the first place, so this can also help fix other underlying problems that were inherent to having this responsibility split over multiple handlers. As a result, it can indirectly fix issue #53.
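For illustration, a minimal sketch on the pathfinder side (the class is simplified and the update_topology helper is an assumption based on the clear-and-rebuild behavior described above, not necessarily the actual code) of what keeping a single handler could look like:

```python
from kytos.core.helpers import listen_to


class Main:  # pathfinder's NApp entry point, simplified for illustration

    @listen_to("kytos/topology.updated")
    def on_topology_updated(self, event):
        topology = event.content["topology"]
        # Clear and rebuild the graph from the full topology, instead of
        # patching edge attributes in place from metadata events.
        self.graph.update_topology(topology)
```

With a single handler that always rebuilds from the full topology, there is no in-place patching of edge attributes left to drift out of sync.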