NASA-AMMOS / AIT-GUI

MIT License

Backend Reimagining #75

Closed MJJoyce closed 5 years ago

MJJoyce commented 6 years ago

After planning and integrating new features that touch the telemetry stream, and after chatting with users about toolkit functionality, I've started feeling like the current backend design is lacking. Below are thoughts and ideas for a potential rework of the way our backend functions. Please note, this is a massive brain dump of ideas. Thoughts and critiques are definitely appreciated.

Currently our backend is divided up into two major chunks:

As such, we have a bit of a gap between the standard AIT functionality and the SLE functionality. Specifically, the telemetry stream has to consist of AIT Packets, and this can be limiting depending on the setup a project needs. It works nicely if a project dumps its telemetry in a single Packet type and those packets fit neatly in a frame (so no real "telemetry processing" is necessary), etc.

There also isn't a convenient location to insert components that need access to the telemetry stream(s). For instance, a component that takes all telemetry and archives it to PCaps or inserts it into a database needs to be "hard coded" into the pipeline. Likely the most elegant approach we can take currently is the one we took with the notification system, whereby users call a function as part of their GUI bootup script to enable this on the backend. You can see the code for creating a GUI session that a greenlet monitors as part of the notification system as an example.
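To make the "hard coded" pain point concrete, here is a minimal sketch of the kind of tap-registration pattern described above, where a user's bootup script registers an archiving component by hand. All names here (`TelemetryTap`, `register_tap`, `dispatch`) are illustrative, not actual AIT API:

```python
# Hypothetical sketch: how a telemetry "tap" must currently be wired in
# by hand from a bootup script. These names are illustrative only.

class TelemetryTap:
    """Receives every telemetry packet and archives it somewhere."""

    def __init__(self):
        self.archived = []

    def on_packet(self, packet):
        # A real component would write to a PCap file or a database here.
        self.archived.append(packet)


_taps = []


def register_tap(tap):
    # Called from a project's GUI bootup script, mirroring how the
    # notification-system session is enabled today.
    _taps.append(tap)


def dispatch(packet):
    # The hook that has to be hard coded into the telemetry pipeline.
    for tap in _taps:
        tap.on_packet(packet)
```

The rework proposed below aims to replace this kind of manual wiring with configuration.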

The gap between the encoded command format and the format a project expects can be overcome by extending the ait.core.cmd classes without being too onerous. However, making this more config driven could be useful, especially for supporting different testing setups and for making it "just work" with the SLE interfaces out of the box.


To the Future

I think there are two places where we could make significant usability improvements while keeping a config / code structure that somewhat resembles our current approach.

Pluggable backend

At a high level this would be changing ait-gui to ait-server. We would change the backend from "receive telemetry and send to front end / send commands to port" to a system that handles the multiplexing of telemetry streams and command output to various plugins. Plugins would implement some (to be determined) interface that allows them to receive all telemetry and send output to the server.
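Since the plugin interface is explicitly "to be determined," here is one minimal sketch of what it could look like: the server multiplexes every message out to each registered plugin. The names (`Plugin`, `AITServer`, `publish`) are assumptions for illustration, not a committed design:

```python
# Sketch of a possible plugin interface. The actual interface was
# "to be determined" at the time of this issue; names are hypothetical.

class Plugin:
    def process(self, topic, data):
        """Receive telemetry / command data routed by the server."""
        raise NotImplementedError


class AITServer:
    def __init__(self, plugins):
        self.plugins = plugins

    def publish(self, topic, data):
        # Multiplex every message out to every registered plugin.
        for plugin in self.plugins:
            plugin.process(topic, data)
```

A GUI plugin, a datastore writer, and an OpenMCT bridge could all implement the same `process` hook and be listed in config rather than hard coded.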

Imagine an example config for ait-server that runs plugins for:

ait-server:
    plugins:
        - ait-gui
        - ait-datastore-writer
        - openmct-bridge

ait-datastore-writer:
    backend: InfluxDB
    username: foo
    password: bar
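One way a server could turn a config like the one above into live plugin instances is a simple name-to-class registry. This is a sketch under assumptions: the registry decorator, the config shape (a dict mirroring the YAML above), and the `DatastoreWriter` constructor are all hypothetical:

```python
# Sketch: instantiating plugins from a config dict shaped like the
# example above. Registry mechanism and class names are hypothetical.

PLUGIN_REGISTRY = {}  # plugin name -> class, filled in by plugin modules


def register(name):
    def wrap(cls):
        PLUGIN_REGISTRY[name] = cls
        return cls
    return wrap


@register('ait-datastore-writer')
class DatastoreWriter:
    def __init__(self, backend='InfluxDB', **kwargs):
        # Extra options (username, password, ...) would configure the
        # datastore connection in a real implementation.
        self.backend = backend


def load_plugins(config):
    plugins = []
    for name in config['ait-server']['plugins']:
        if name in PLUGIN_REGISTRY:
            opts = config.get(name, {})
            plugins.append(PLUGIN_REGISTRY[name](**opts))
    return plugins
```

The top-level `ait-server.plugins` list names what to run; each plugin's own top-level section supplies its options.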

Stream Rework

The current stream system configures the GUI to receive a particular Packet type on a given port. There's also some flexibility for configuring whether the stream is of a certain type (e.g., unsegmented CCSDS packets or the default "raw").

Creating a system whereby a telemetry stream can be piped through "handlers" that perform some task on the telemetry would allow us to support customizable data flows while also providing an easy entry point for user customization.
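The handler idea reduces to a small pipeline abstraction: each handler transforms the data and hands its output to the next. A minimal sketch, with hypothetical class names:

```python
# Sketch of the handler-pipeline idea: each handler transforms the
# stream data and passes its output along. Names are illustrative.

class Handler:
    def handle(self, data):
        raise NotImplementedError


class Stream:
    def __init__(self, name, handlers):
        self.name = name
        self.handlers = handlers

    def process(self, data):
        # Pipe the raw input through each handler in order.
        for handler in self.handlers:
            data = handler.handle(data)
        return data
```

A project could then customize a data flow just by listing handlers in config rather than modifying pipeline code.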

Suppose we have a system where we receive telemetry on 3 different ports depending on the setup we're running or testing. While not necessarily realistic, hopefully this drives home the point. We'll have streams that:

ait-server:
    plugins:
        - ait-gui
        - ait-datastore-writer
        - openmct-bridge

    inbound-streams:
        - stream:
            name: raw_ehs
            port: 3076
            handlers:
                - ait-packet-handler:
                     packet: Example_1553_EHS

        - stream:
            name: ccsds_stream
            port: 3077
            handlers:
                - ccsds-packet-handler:
                     apids:
                         1: Example_1553_EHS
                         2: Some_Other_Packet

        - stream:
            name: sle_data_stream
            port: 3078
            handlers:
                - tm-trans-frame-decode-handler
                - ccsds-packet-handler:
                     apids:
                         1: Example_1553_EHS
                         2: Some_Other_Packet

ait-datastore-writer:
    backend: InfluxDB
    username: foo
    password: bar
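As a sketch of what the `ccsds-packet-handler` in the config above might do, the handler can demultiplex packets by APID, which in a CCSDS primary header is the low 11 bits of the first two bytes. The function names and the idea of returning a packet-definition name are assumptions for illustration:

```python
# Sketch of the ccsds-packet-handler idea: route a packet to a packet
# definition by its CCSDS APID. Function names are hypothetical.

def ccsds_apid(primary_header):
    # The APID occupies the low 11 bits of the first two header bytes
    # (after the version, type, and secondary-header-flag bits).
    return ((primary_header[0] << 8) | primary_header[1]) & 0x07FF


# Mirrors the apids mapping in the example config above.
APID_MAP = {1: 'Example_1553_EHS', 2: 'Some_Other_Packet'}


def route_packet(raw):
    # Look up which packet definition should decode this raw packet.
    return APID_MAP.get(ccsds_apid(raw[:2]))
```

The `sle_data_stream` would run the same routing step, but only after the `tm-trans-frame-decode-handler` has extracted packets from transfer frames.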

Similar to the telemetry streams above, being able to customize how we wrap encoded commands could also be useful. You could imagine an outbound-streams configuration option that takes encoded commands, passes them through one or more handlers, and finally outputs them to a port. This would allow us to run multiple output streams with different formats: for example, one stream that encodes commands as necessary to send to FSW for testing, and another that encodes and wraps them as necessary for delivery for emission, testing with a radio, etc.
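An outbound stream could reuse the same handler-chain shape, ending in a send step instead of a plugin. A minimal sketch, where the class name, the callable-handler style, and the length-prefix framing are all assumptions for illustration:

```python
# Sketch of a possible outbound-streams counterpart: encoded commands
# pass through wrapping handlers before being written to a port.
# Names and the framing format are hypothetical.

class OutboundStream:
    def __init__(self, name, handlers, send):
        self.name = name
        self.handlers = handlers
        self.send = send  # e.g. a UDP socket's send bound to a port

    def emit(self, encoded_cmd):
        data = encoded_cmd
        for handler in self.handlers:
            data = handler(data)
        self.send(data)


def length_prefix(data):
    # Example wrapper handler: prepend a 2-byte big-endian length.
    return len(data).to_bytes(2, 'big') + data
```

A test setup could configure one stream with no wrappers for FSW and another with radio-specific framing handlers, both fed the same encoded commands.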

MJJoyce commented 6 years ago

@lorsposto @jordanpadams @ldahljpl

I know this is a massive brain dump but thoughts, critiques, ideas, etc. would be great!

MJJoyce commented 5 years ago

Misc. messaging queues to poke through and consider:

nanomsg https://github.com/tonysimpson/nanomsg-python

Kombu https://github.com/celery/kombu

Pika https://github.com/pika/pika

aywaldron commented 5 years ago

Notes and links on various messaging libraries:

Library/protocol comparisons to consider:

Message format size comparisons

aywaldron commented 5 years ago

@MJJoyce and I discussed 3 different ideas for dealing with a divergence in multi-step stream handling, i.e. a case where at some point in the handling process, a stream needs to be handled by 2 or more handlers in parallel which will produce separate outputs. The 3 different approaches we discussed are described below with an example config for a stream that would first be handled by a tm-trans-frame-decode-handler and then the output of that handler would be handled by the ccsds-packet-handler and parallel-handler in parallel, producing 2 separate outputs.

  1. Stream handlers that should be executed sequentially would be nested (indented) in the config, while stream handlers that should be executed in parallel would be listed sequentially at the same indentation level. There would be one stream per input that could have multiple outputs. The flow of each input would be encapsulated in a single stream; multiple handling processes would be represented in a single stream.

    - stream:
        name: sle_data_stream
        port: 3078
        handlers:
            - tm-trans-frame-decode-handler
                - ccsds-packet-handler
                - parallel-handler
  2. A different stream would be defined for each divergent handling process, which would execute the entire process. With this approach, work done by shared handler steps before the process divergence may be repeated in different streams. The flow of each handling process would be encapsulated in a single stream. All handlers in each stream would be executed sequentially and would be listed sequentially at the same indentation level. There could be multiple streams per input, and each stream would have exactly one output.

    - stream:
        name: sle_data_stream
        port: 3078
        handlers:
            - tm-trans-frame-decode-handler
            - ccsds-packet-handler
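Approach 1's nested config describes a handler tree: children of a node run (conceptually in parallel) on that node's output, and each leaf is a separate output. A pure-Python sketch of that execution model, sequential for clarity, with illustrative names:

```python
# Sketch of approach 1's execution model: a handler tree in which each
# node's children all consume that node's output, producing one result
# per leaf. Runs sequentially here for clarity; names are illustrative.

def run_tree(data, tree):
    """tree is a list of (handler, subtree) pairs; returns leaf outputs."""
    outputs = []
    for handler, children in tree:
        result = handler(data)
        if children:
            # Divergence point: every child sees the same result.
            outputs.extend(run_tree(result, children))
        else:
            outputs.append(result)
    return outputs
```

In this model the `tm-trans-frame-decode-handler` runs once, and `ccsds-packet-handler` and `parallel-handler` each receive its output, matching the nesting in the example config.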
aywaldron commented 5 years ago

@MJJoyce and I have worked out this rough design for the new AIT Server. The ZeroMQ proxy manages the subscriptions between many publishers and subscribers to solve the discovery problem (http://zguide.zeromq.org/py:all#The-Dynamic-Discovery-Problem). Dashed arrows show the flow of telemetry. The 3 different proxies inheriting from the 0MQProxy in the diagram show the 3 different data flows that the proxy manages - in implementation, all 3 functionalities will likely be managed by a single proxy class. We are considering having streams manage their handlers using ZeroMQ PUB and SUB sockets as well.
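The key property of the proxy design is that publishers and subscribers only ever know the proxy, never each other, which is what solves the dynamic-discovery problem. A pure-Python sketch of that pattern (in the real design this would be a ZeroMQ XSUB/XPUB proxy forwarding over sockets; the class and method names here are illustrative):

```python
# Pure-Python sketch of the proxy pattern in the design above:
# publishers and subscribers register only with the proxy, never with
# each other. The real design would use a ZeroMQ XSUB/XPUB proxy;
# names here are illustrative.

class Proxy:
    def __init__(self):
        self.subscribers = {}  # topic prefix -> list of callbacks

    def subscribe(self, prefix, callback):
        # Subscribers declare interest by topic prefix, ZeroMQ-style.
        self.subscribers.setdefault(prefix, []).append(callback)

    def publish(self, topic, msg):
        # Forward to every subscriber whose prefix matches the topic.
        for prefix, callbacks in self.subscribers.items():
            if topic.startswith(prefix):
                for cb in callbacks:
                    cb(topic, msg)
```

Streams, plugins, and handlers could all attach to such a proxy without any component needing a list of its peers' endpoints.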

[Screenshot: rough AIT Server design diagram showing the ZeroMQ proxy and telemetry data flows, 2018-11-29]