heronarts / LX

Core library for 3D LED lighting engines
https://chromatik.co/

DDP packet size #8

Closed · bbulkow closed this issue 6 years ago

bbulkow commented 6 years ago

Hi Mark - thanks for putting extra code in for the DDP packet issue I ran into.

I am wondering if you really want to go this direction, framework-wise. By simply exposing "push" and allowing data offsets, the application code has to know the underlying network transport packet size, which I consider more of a framing issue.

If, instead, LXDatagram had a list of packets and called socket.send() on each of them in order, that would seem to allow an individual LXDatagram subclass to have multiple packets.

Alternatively, exposing a 'send' method that defaults to socket.send() but could be overridden would seem efficient.

I believe this creates complexity in setting RGB values for Output, too, but I haven't thought through that.

As a final thought, a very simple hack would be to expose a "maximum size" on LXDatagram. With the new scheme, application code that wants to be transport independent always has to include a loop to generate multiple datagrams, but at least it could size that loop from a value exposed by the LXDatagram, which knows about its own framing and protocol problems.
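
To make that last idea concrete, here is a minimal sketch of application code sized by a capacity hint that the datagram class itself publishes; the MAX_POINTS_PER_DATAGRAM value and the int-array DDPDatagram constructor are assumptions for illustration, not the current API.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import heronarts.lx.output.DDPDatagram;

public class CapacityHintSketch {
  // Hypothetically this value would be published by DDPDatagram itself, so the
  // application never hard-codes knowledge of the network MTU.
  static final int MAX_POINTS_PER_DATAGRAM = 400;

  static List<DDPDatagram> buildChunks(int[] pointIndices) {
    List<DDPDatagram> datagrams = new ArrayList<DDPDatagram>();
    for (int offset = 0; offset < pointIndices.length; offset += MAX_POINTS_PER_DATAGRAM) {
      int end = Math.min(offset + MAX_POINTS_PER_DATAGRAM, pointIndices.length);
      // Assumed constructor taking a chunk of point indices
      datagrams.add(new DDPDatagram(Arrays.copyOfRange(pointIndices, offset, end)));
    }
    return datagrams;
  }
}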

As you know, Squared supported both FadeCandy ( OPC via TCP, no frame problems ), so I just keep thinking about how groovy LX is as a framework, how it comes with a lot of "stuff", and one of those things is Output....

I am also considering whether to avoid the problem by adding another NDB. I've got a ton of workarounds, I'm just thinking out loud about your framework....

Food for thought.... and I'm still evaluating the shift from the pre-LX Studio world to the LX Studio world.... still a matter of time and energy....

tracyscott commented 6 years ago

I think what you want is a higher-level abstraction that models your destination device: you assign a chunk of LXPoints to that destination device object and it builds your set of datagrams. So there would be a DDPDestination output class that builds a collection of DDPDatagrams and then defers to LXDatagramOutput for sending.

We are using ArtNet to PixLite controllers, so our equivalent would be mapping a set of LXPoints to an ArtNetDestination with just a start universe and, optionally, a number of universes. If the number of universes is not specified, compute it based on 170 points per universe; otherwise, distribute the points evenly across the universes. That would make getting started easier for people.

We wouldn't actually use it though, because I put in code to remap universes in case somebody wires something up incorrectly, so I have to build my output datagram-by-datagram. Actually, if ArtNetDestination also optionally took an int array of universe numbers, we could still use it and support universe remapping for handling wiring problems (we have some controllers high off the ground, so fixing it in software is easier).
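
A minimal sketch of what such an ArtNetDestination helper could look like, assuming an ArtNetDatagram constructor that accepts a chunk of point indices and a universe number (the real signature may differ); the int-array variant covers the remapping case:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import heronarts.lx.output.ArtNetDatagram;

public class ArtNetDestinationSketch {

  static final int POINTS_PER_UNIVERSE = 170;

  // Map a block of point indices onto consecutive universes starting at startUniverse.
  static List<ArtNetDatagram> build(int[] pointIndices, int startUniverse) {
    int numUniverses = (pointIndices.length + POINTS_PER_UNIVERSE - 1) / POINTS_PER_UNIVERSE;
    int[] universes = new int[numUniverses];
    for (int i = 0; i < numUniverses; ++i) {
      universes[i] = startUniverse + i;
    }
    return build(pointIndices, universes);
  }

  // Map a block of point indices onto an explicit list of universes, which allows
  // remapping when the physical wiring doesn't match the plan.
  static List<ArtNetDatagram> build(int[] pointIndices, int[] universes) {
    List<ArtNetDatagram> datagrams = new ArrayList<ArtNetDatagram>();
    for (int i = 0; i < universes.length; ++i) {
      int start = i * POINTS_PER_UNIVERSE;
      int end = Math.min(start + POINTS_PER_UNIVERSE, pointIndices.length);
      // Assumed constructor: (point indices, universe number)
      datagrams.add(new ArtNetDatagram(Arrays.copyOfRange(pointIndices, start, end), universes[i]));
    }
    return datagrams;
  }
}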

mcslee commented 6 years ago

Architectural flavor here. LXDatagram is meant to be a thin wrapper around a single UDP packet (e.g. the DatagramPacket class in Java), which formats the datagram appropriately for whatever protocol is being used - but it's not intended to be any smarter than that or to perform any higher-level logic.

If, instead, LXDatagram had a list of packets and called socket.send() on each of them in order, that would seem to allow an individual LXDatagram subclass to have multiple packets.

This is essentially what the LXDatagramOutput class is doing - it is responsible for sending a set of datagrams (packets) in a defined order: https://github.com/heronarts/LX/blob/master/src/heronarts/lx/output/LXDatagramOutput.java#L80

This question of how and when to frame is not really a generic thing - it's quite protocol specific. The way the push and data offset fields work in DDP is particular to DDP. ArtNet universe size limitations are specific to ArtNet/DMX, etc. I think what you are pointing towards here that could make sense would be something like a Factory/Builder pattern, a helper class that takes a big set of points and constructs a set of Datagrams according to some heuristics (those could be limitations of the protocol, frame size limits, or provided hints about where to split based upon physical wiring, etc.). I'd imagine that as a separate DDPDatagramBuilder class which then produces a list of DDPDatagram objects.
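
A rough sketch of what that DDPDatagramBuilder could look like, just to illustrate the shape; the builder class itself and the setDataOffset/setPushFlag setters are assumptions based on this discussion rather than a definitive API:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import heronarts.lx.output.DDPDatagram;

// Hypothetical builder: takes a big set of point indices plus hints about how
// to split them, and produces the list of DDPDatagram objects to send.
public class DDPDatagramBuilder {

  private int maxPointsPerDatagram = 400; // hint, e.g. derived from the MTU in use

  public DDPDatagramBuilder setMaxPointsPerDatagram(int maxPoints) {
    this.maxPointsPerDatagram = maxPoints;
    return this;
  }

  public List<DDPDatagram> build(int[] pointIndices) {
    List<DDPDatagram> datagrams = new ArrayList<DDPDatagram>();
    for (int offset = 0; offset < pointIndices.length; offset += this.maxPointsPerDatagram) {
      int end = Math.min(offset + this.maxPointsPerDatagram, pointIndices.length);
      DDPDatagram datagram = new DDPDatagram(Arrays.copyOfRange(pointIndices, offset, end));
      datagram.setDataOffset(offset); // assumed setter for the DDP data offset
      datagrams.add(datagram);
    }
    if (!datagrams.isEmpty()) {
      // Assumed setter: only the final datagram triggers the push
      datagrams.get(datagrams.size() - 1).setPushFlag(true);
    }
    return datagrams;
  }
}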

I am also considering whether to avoid the problem by adding another NDB.

Yep, another common solution and many ArtNet projects similarly have multiple receivers on different IP addresses.

High-level, a place I would eventually like to get to is moving output mapping out of code and into the LX Studio UI. This is obviously a non-trivial project and I can't say when I might actually get to that, but the UI could offer tools to select chunks of points and specify what IP address they're going to, via which protocol, etc. and in which order all of this stuff happens (essentially what is done in hand-written code at the moment).

In any case - hope this provides some context on the architecture. If you wanted to make a DDPDatagramBuilder for your use case here I think that'd be pretty sensible.

And I just threw in a little utility that makes breaking an LXFixture into chunks of point indices of a maximum size a bit easier, maybe that'll be useful here: https://github.com/heronarts/LX/commit/3a58f102e1c47de1ebf0cd6faa400ee7b20e300d

bbulkow commented 6 years ago

I agree about architectural flavor, and of course it's your architecture.

Let me make my case one more time, with code...

What we're talking about is adapting to one rather low-level change - a network MTU setting, which only applies to some Datagram types, and has different break points depending on the gear involved (although realistically it has only two settings, 1500 and 9.9k). I certainly don't want to change how my LXFixtures work every time I run in a different network environment. And, hey, maybe Mark L. will support DDP frag in his boxes, in which case the code goes away.... which means this kind of split would be sensitive to DDP receiver version ( sigh ).

With your scheme, the best I can come up with is something more like:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class Output {
  static List<NewDDPDatagram> clusterDatagram(Cluster cluster) {
    List<Bulb> bulbs = cluster.getBulbs();
    int[] pointIndices = new int[bulbs.size()];
    int pi = 0;
    for (Bulb bulb : bulbs) {
      pointIndices[pi++] = bulb.index;
    }
    List<NewDDPDatagram> dgs = new ArrayList<NewDDPDatagram>();
    if (pointIndices.length <= 400) {
      // Everything fits in a single datagram
      dgs.add(new NewDDPDatagram(pointIndices));
    } else {
      // Split into chunks of at most 400 points; the data offset tells the
      // receiver where each chunk lands
      for (int i = 0; i < pointIndices.length; i += 400) {
        int end = Math.min(i + 400, pointIndices.length);
        NewDDPDatagram dg = new NewDDPDatagram(Arrays.copyOfRange(pointIndices, i, end));
        dg.setDataOffset(i);
        dgs.add(dg);
      }
      // Only the final datagram sets the push flag
      dgs.get(dgs.size() - 1).setPushFlag(true);
    }
    return dgs;
  }
}

This code is consistent with your flavor, and isolates the grotty bits to at least a spot where they can be found, a location that is specific to DDPDatagram. I would expect, in the future, that every application would need to carry a List of datagrams around, because there is no knowing how many datagrams one might need on a given network.... and changing network conditions would require a code change right around Output, but it would at least be isolated there.... if an LXDatagram is meant to be a one-to-one map to a packet that supports being socket.send()'ed, everywhere that uses LXDatagram really needs to be coded against a List of them.

A minor improvement would be to replace the '400' you see there with a field or method that exposes the carrying capacity of a particular LXDatagram.

What's the other architecture look like?

LXDatagram could (for the protocols that require it) be a little more robust and encapsulate several underlying packets. I could argue, from a theoretical perspective, that since UDP datagrams themselves fragment, this is the same kind of thing: a DDPDatagram fragments into multiple UDP packets, instead of transmitting one large UDP packet that fragments on the wire. This is a lower-level choice that is appropriate to make at around this layer, below LXDatagram. Only the protocols / implementations with this problem pay the performance price.

LXDatagram, as defined, is actually very close. In that architecture, the send() method would exist on LXDatagram, defined by default as socket.send() but able to be overridden - and maybe LXDatagram has to know about the socket, come to think of it. The two new methods you've created aren't necessary: no set_push would be required on LXDatagram, nor the indexes, nor (if you take my idea above) exposing the datagram size. A single LXDatagram could then confidently be used with far larger models (up to where you hit things like universe limits).
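
For comparison, a minimal sketch of that alternative shape, where send() lives on the datagram, defaults to one socket.send(), and a DDP-specific subclass overrides it to emit several UDP packets. None of this is the existing LXDatagram API, and the DDP header handling is elided.

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.util.ArrayList;
import java.util.List;

// Sketch only: a stand-in for the proposed base class, not the real LXDatagram.
class SketchDatagram {
  protected DatagramPacket packet;

  // Default behavior: a datagram maps to exactly one UDP packet.
  void send(DatagramSocket socket) throws IOException {
    socket.send(this.packet);
  }
}

// A DDP-flavored subclass fragments itself into several UDP packets, so only
// the protocols/receivers with this limitation pay the cost.
class ChunkedDDPDatagramSketch extends SketchDatagram {

  private final List<DatagramPacket> packets = new ArrayList<DatagramPacket>();

  ChunkedDDPDatagramSketch(byte[] rgbData, int maxPayload, InetAddress address, int port) {
    for (int offset = 0; offset < rgbData.length; offset += maxPayload) {
      int length = Math.min(maxPayload, rgbData.length - offset);
      // A real implementation would prepend a DDP header carrying this data
      // offset and set the push flag only on the final fragment.
      byte[] payload = new byte[length];
      System.arraycopy(rgbData, offset, payload, 0, length);
      this.packets.add(new DatagramPacket(payload, length, address, port));
    }
  }

  @Override
  void send(DatagramSocket socket) throws IOException {
    for (DatagramPacket fragment : this.packets) {
      socket.send(fragment);
    }
  }
}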

I realize there is complexity here - notably the fact that people chunk their LXPoints up because of DMX universes anyway, while some people (like DDP users) don't have to, etc. Maybe my idea is dumb, but I am an old networking guy and I've dealt with layering in exactly this area far too much.

I appreciate the need to make an architectural choice and move on, just thought I'd get it down for the record. Thanks for listening.

bbulkow commented 6 years ago

@tracyscott Yes, I am already putting those collections together. The issue is that a DDPDatagram supports about 500 LXPoints per datagram if I am using 1500-byte packets, or about 3,300 points if I am using jumbo frames. LEDs simply don't light up if I put more on the controller and don't massage driver parameters, and I am trying to choose between more code complexity (which, it turns out, even reaches into the framework) and requiring particular network gear.
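
For reference, those capacity figures roughly follow from the frame size minus IP/UDP/DDP header overhead, at three bytes per RGB point; a back-of-the-envelope sketch (header sizes approximate):

public class DDPCapacity {
  // Rough estimate: MTU minus IP (20) + UDP (8) + DDP (~10) header bytes,
  // divided by 3 bytes per RGB point. Header sizes are approximate.
  static int approxPointsPerPacket(int mtu) {
    final int headerOverhead = 20 + 8 + 10;
    return (mtu - headerOverhead) / 3;
  }

  public static void main(String[] args) {
    System.out.println(approxPointsPerPacket(1500)); // ~487: about 500 points on standard frames
    System.out.println(approxPointsPerPacket(9900)); // ~3287: about 3,300 points with jumbo frames
  }
}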

I found this out when I wired up my 720-LED controller and my 240-LED controllers and, surprise, the 720's didn't work... unless I used a more modern laptop.... and changed a driver parameter.... then everything worked super great ( very fast too ).

DDP simply has different constraints than ArtNet, because the DDP creator wanted to do away with the legacy "universe" concept in order to simplify, which is a concept I can get behind, but it turns out his scheme for easily supporting more LXPoints per controller is somewhat thwarted by the problem of those controllers not supporting UDP fragmentation.

Hopefully that makes more sense.

mcslee commented 6 years ago

Thanks for writing all that out - I think we're more in agreement than not here, just a question of the structure. I am pretty on board with something that looks like the sample code you wrote, as a Builder pattern.

I think the reason I prefer for that to be separate is a question of architectural layers and how I am thinking about what an LXDatagram is.

By analogy, the DatagramPacket class in Java is a UDP packet. The class itself does not expose an awareness of what the underlying network hardware may be (either the local driver settings, or limitations of a remote device). As you point out, UDP packets may be fragmented. But that fragmentation does not happen at the UDP protocol level; it happens at the IP layer below. The DatagramPacket class doesn't expose that.

I certainly don't want to change how my LXFixtures work every time I run in a different network environment. And, hey, maybe Mark L. will support DDP frag in his boxes, in which case the code goes away.... which means this kind of split would be sensitive to DDP receiver version ( sigh ).

This is why I feel this logic should be separate from the DDPDatagram class. In my view, the DDPDatagram class is simply an implementation of a DDP packet, with options to format the content of that packet.

As you point out, there are numerous network factors that constrain the function of the DDP protocol (i.e. ways that sending valid DDP packets will not result in expected behavior). I just don't think the packet class itself is the place to compensate for that.

If the DDPDatagramBuilder feels awkward here, I'd also be entirely cool with a DDPOutput class, which would be a subclass of LXDatagramOutput, that just takes the big list of points and also is told what kind of chunk size to use. Internally, this class would use DDPDatagram and its methods - but from an application-programmer's POV, it would behave like you're describing (one API object for my whole cluster, and I just need to give it the right hints about framing).
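
Roughly along these lines; this sketch is not necessarily the class from the commit linked below, and it assumes an LXDatagramOutput(LX) constructor, an addDatagram(...) method, and the DDPDatagram constructor/setters discussed in this thread (address configuration omitted):

import java.net.SocketException;
import java.util.Arrays;

import heronarts.lx.LX;
import heronarts.lx.output.DDPDatagram;
import heronarts.lx.output.LXDatagramOutput;

// Hypothetical higher-level output: give it the whole point list and a chunk
// size, and it builds and registers the underlying DDPDatagrams itself.
public class DDPOutputSketch extends LXDatagramOutput {

  public DDPOutputSketch(LX lx, int[] pointIndices, int chunkSize) throws SocketException {
    super(lx);
    for (int offset = 0; offset < pointIndices.length; offset += chunkSize) {
      int end = Math.min(offset + chunkSize, pointIndices.length);
      DDPDatagram datagram = new DDPDatagram(Arrays.copyOfRange(pointIndices, offset, end));
      datagram.setDataOffset(offset); // assumed setter, per this thread
      addDatagram(datagram);
    }
    // A push-flag policy (final datagram only vs. every datagram) could be an
    // option here, as discussed below.
  }
}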

There are some other relevant options here, such as the question: do you only want the push flag set on the final datagram? That should also be specified by the user (for instance, if your installation has modular components, perhaps you don't want to wait to push them all at once and instead want each packet to update its section of the sculpture immediately).

That's another reason why I want to keep these separate - I'd like to maintain a simple API such that a programmer who wishes to completely format their own DDP packets may do so, alongside a separate API for a programmer who wants to configure DDP output at a higher level and have it "just work."

I just went ahead and whipped up what I have in mind for this: https://github.com/heronarts/LX/commit/38c5f489b1178886c5bf660863f305d9c8789784

If this better matches what you were imagining, feel free to grab that for your project.

but it turns out his scheme for easily supporting more LXPoints per controller is somewhat thwarted by the problem of those controllers not supporting UDP fragmentation.

FYI - I believe that DDP also supports TCP on the NDBs, which may be part of the rationale for the huge frames. But IMO it is still preferable to use chunked UDP... on a stable wired network (as is typical for these types of installations) that's going to be quicker than the TCP overhead, with no need to deal with statefully detecting and recovering from connection failures.

mcslee commented 6 years ago

Closing this one out per the proposed DDPOutput solution - feel free to re-open if you have any more issues with this!