Open gh0st42 opened 1 year ago
The thought has been to potentially decouple movement from nodes entirely, allowing movement in a general sense that can drive whatever nodes you like. Maybe even supporting multiple inputs (ideally with no overlap).
Not sure when or if there would be time to formally put effort into this. Alternatively, you can drive movement over the gRPC API, which lets you drive it however you like, but that is a different approach.
Appreciate the thoughts and input.
Having general movement and not limiting it to specific node types would make sense; I actually just found out that this is not already the case when trying to move "unlinked" regular nodes via an ns2 movement trace :)
Maybe just use an ns2 movement file and move whatever node ID is specified there over the canvas.
It should not matter whether it is a switch, hub, wlan, docker or PC node.
It's the responsibility of the user providing the trace to correctly map the trace node IDs to the coreemu node IDs.
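For illustration, such a trace could keep the standard ns2/BonnMotion mobility syntax as it is today, with the node index taken directly as the coreemu node ID (IDs and coordinates below are made up):

```
# initial positions
$node_(1) set X_ 100.0
$node_(1) set Y_ 200.0
$node_(1) set Z_ 0.0
# scheduled movement: at t=10s, node 1 heads to (300, 250) at 5 m/s
$ns_ at 10.0 "$node_(1) setdest 300.0 250.0 5.0"
$ns_ at 10.0 "$node_(4) setdest 120.0 80.0 2.5"
```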
Starting from the XML scenario file, movement(s) should probably live directly at the top level, as they are independent of all other nodes and affect the whole <scenario>.
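Purely as an illustration of what I mean (element and attribute names here are hypothetical, not the current CORE XML schema), something along these lines:

```xml
<scenario name="example">
  <!-- hypothetical top-level movement entry, not tied to any WLAN/EMANE node -->
  <movement model="ns2script" file="merged_trace.ns_movements" autostart="true" loop="false"/>
  <networks>...</networks>
  <devices>...</devices>
</scenario>
```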
Using gRPC or one of the CLI tools is always possible for external tools, as we did with core-automator, but it's harder to maintain because the gRPC and CLI interfaces change quite a bit between releases.
Plus, it is nice to have something out of the box to ship self-contained scenarios and let them run headless.
As the code for ns2 movements is already in the code base, it would make sense to use this as the standard. For more advanced use cases, one can still use the gRPC interface to move nodes directly. We currently do exactly that for live position updates from a UAV simulation environment, and it works well enough, but we had to change our movement "syncing" code for each new core release because the gRPC interface changed a bit every time. Furthermore, external processes have other downsides: one needs to check whether the emulation is running and in a state where nodes can be moved, the session ID to apply the movements to must be known, etc. If the movement is specified in the scenario file, everything becomes a bit easier for automated experiments.
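For reference, the kind of external "mover" we run looks roughly like the sketch below. It assumes the Python core.api.grpc client; the exact method names are an assumption and have shifted across releases (e.g. older releases used edit_node() where newer ones have move_node()), which is exactly the maintenance burden mentioned above:

```python
# Rough sketch of an external mover pushing positions over the CORE gRPC API.
# Method names are an assumption and vary between CORE releases.
from core.api.grpc.client import CoreGrpcClient
from core.api.grpc.wrappers import Position

core = CoreGrpcClient()
core.connect()

# An external process first has to find a running session to attach to.
sessions = core.get_sessions()
if not sessions:
    raise SystemExit("no running CORE session to apply movement to")
session_id = sessions[0].id

# Then push a position update for the node it wants to move
# (node_id must already be mapped to the right coreemu node).
core.move_node(session_id, node_id=3, position=Position(x=150.0, y=250.0))
```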
And while I personally think ns2 movements are a very strange and archaic format, they are kind of a standard and guarantee interop with other simulators such as ns-3, the ONE, OMNeT++, etc. Also, there are many tools such as BonnMotion to easily generate these movements or convert to them, so keeping compatibility should be one of the goals imo.
Is your feature request related to a problem? Please describe.
Currently, movement is tied to specific WLANs. If I have a bunch of nodes n1-n10 and p1-p10 that each have their own WLAN, but nodes a1-a3 are part of both WLANs, I get a problem with mobility because the mobility is applied from the different WLANs. I cannot add a "ghost" WLAN that only does node movement, because I need to link the nodes to the WLAN node before the ns2 movements are applied. This then adds extra interfaces to the nodes, I have to remember to reduce the range and bandwidth to 0, etc.
Describe the solution you'd like
A special node type that just lets me load an ns2 movement trace and applies it to all nodes in the scenario. Maybe even without linking, just by going after the IDs provided in the trace.
Describe alternatives you've considered
Having pure MOBILITY node types that require linking could also work; we would not need to remap different ns2 traces, as core could automatically map them to the nodes linked to the MOBILITY node.
Additional context
We often apply different movement patterns to different host groups, merging them into one big ns2 file. Many of our nodes are part of different networks and need to bridge between them or have different link characteristics per interface, e.g., Bluetooth, LoRa and WiFi.
Having extra MOBILITY nodes would also make it visually much clearer how and where to configure mobility in a scenario. WLAN and EMANE having mobility but WIRELESS not having it is sometimes confusing :)