bitfocus / companion

Bitfocus Companion enables the reasonably priced Elgato Stream Deck and other controllers to be a professional shotbox surface for an increasing amount of different presentation switchers, video playback software and broadcast equipment.
http://bitfocus.io/companion

[Feat] Daemon for interaction with OS and hardware #2849

Open dnmeid opened 6 months ago

dnmeid commented 6 months ago

Is this a feature relevant to companion itself, and not a module?

Is there an existing issue for this?

Describe the feature

Today we have more or less no way to interact with hardware connected to the computer or with the OS itself. The only exception is surfaces, which are handled by core or via Satellite, but with some limitations. The proposed feature is a per-OS daemon which handles all of that. That means:

That way Companion core can be a network-only application and the whole system can be more distributed and flexible.

Usecases

Julusian commented 6 months ago

I have more thoughts, which I will write out later.

But a big one is: should modules run in Companion or in the daemon? Let's assume that we could auto-deploy modules to the daemon (for the v3 ones, we definitely could). I think some modules will need to run on this daemon, for example one which needs a serial port, because the module could be using an existing npm library which doesn't give us the flexibility of injecting a proxy for the serial port access, or would simply end up with terrible performance from waiting on promises which are now dependent on network latency. I feel like we could say that modules always run in a daemon, which by default would be the local one.

dnmeid commented 6 months ago

But a big one is, should modules run in companion or the daemon?

My first answer would be in Companion, but I'm open to any discussion.

If the module runs in the daemon, the daemon would need to be able to run javascript and whatever else we may support in the future. I don't have a strong aversion to this, but it will definitely make the daemon less lightweight.

If the module runs in the daemon, a module can only use the resources of the daemon it is running on. Again, I don't feel this is a big con.

In my opinion the proxying part is a big pro, as it tries to make resources available to more than one module. If a module is fine with using a resource through an interface provided by the daemon, what would be the pros/cons of running it in Companion versus the daemon? If the module relies on a library which uses a resource out of our control, what would be the pros/cons then?

One con of running the complete module in the daemon may be that the module then needs to be OS dependent, whereas one of my goals is to make as much as possible OS independent in Companion and provide it to the modules in a standardized way. The OS-dependent stuff should be done only once, for the daemon itself. I would like to have a way for modules that need to use some OS-dependent stuff to manage it themselves, e.g. via an SDK, but I feel this will only be a small number of modules.

Julusian commented 6 months ago

Another feature that I think should be supported from the beginning:


daemon provides access to OS stuff like:

It sounds like you want to provide an abstraction/friendly API over these things. For many of these I'm not sure how worthwhile that will be for us to do, given that some modules will be using libraries that expect direct access. But this is similar to the TCPHelper and related classes: any module without an existing library uses them, and any module that does use a library will not.

I'm not entirely opposed to the idea, but I don't think it will work in every scenario. And I would still like to allow modules to be written in other languages one day, I have no idea how these abstractions would impact that.
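To make the TCPHelper comparison concrete, here is a minimal sketch of the abstraction pattern under discussion. The class and method names (`SerialHelper`, `LoopbackTransport`) are invented for illustration and are not Companion's actual API; the point is that when the transport backing a resource is injected, the same module code works whether the serial port is local or proxied through a daemon, while a module that uses its own npm library bypasses this layer entirely.

```javascript
// Hypothetical sketch of a TCPHelper-style abstraction: the transport is
// injected, so the module never knows whether the port is local or remote.
class SerialHelper {
  constructor(transport) {
    this.transport = transport; // anything with a write(bytes) method
  }
  send(bytes) {
    this.transport.write(bytes);
  }
}

// In-memory transport standing in for either a local port or a daemon proxy.
class LoopbackTransport {
  constructor() {
    this.written = [];
  }
  write(bytes) {
    this.written.push(bytes);
  }
}

const transport = new LoopbackTransport();
const port = new SerialHelper(transport);
port.send(Buffer.from([0x01, 0x02]));
console.log(transport.written.length); // 1
```

A daemon-side implementation of the same transport interface could forward the bytes over the network, which is exactly where the latency concerns discussed later come in.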


daemons on same or on remote computers should be automatically detected and connected to not add another layer of complexity for the user

daemons on the same computer should be auto-connected to. Daemons on remote computers should be listed in the UI as available to be connected to, but I don't think it should auto-connect. We definitely need some form of security in the protocol these speak, otherwise it is a bigger security hole than Companion is today. And connecting to everything will often be undesirable; in my experience it is common for every operator to have their own Companion install on a shared network. Having every Companion be automatically linked would be annoying. But this could definitely be made simple to 'pair', such as the user entering a 4-digit code provided by the daemon.


daemon should be performant and lightweight, probably not electron based

I am open to this, but if not nodejs what languages would you consider?

For those unaware, the current model of companion is:

If you run headless companion (eg companionpi), then the launcher layer is skipped and systemd runs the nodejs process which is companion.

Julusian commented 6 months ago

Another reason I have for running the modules in the daemon, is that it allows for a better distribution of processing. Instead of needing a single large powerful companion machine which runs everything, you can push the modules out to smaller 'compute' nodes. And you could run these daemons close to what they are controlling. It will also allow for

Perhaps you are doing a show which has some vmix in aws, an atem and other stuff in the studio and another vmix and streamdecks with the operator at home. You could run companion in aws, with the built in daemon talking to the aws vmix. Add another daemon in the studio which talks to the stuff there, and a third daemon with the operator which will talk to their vmix and add their streamdecks to the system. With this model, only the minimal amount of data will need to be pushed between the daemons and companion. Not the full vmix data each poll, just whatever the module wants to report back. And no risk of the atem connection timing out due to network latency, as that part happens locally.
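The "only the minimal amount of data" point can be sketched as a simple state diff: the daemon-side module polls the device locally and only reports changed values back to Companion. The field names below are made-up vmix-flavoured examples, not a real message format:

```javascript
// Compare the last state reported to Companion against the freshly polled
// device state, and return only the keys whose values changed.
function diffState(previous, current) {
  const changes = {};
  for (const [key, value] of Object.entries(current)) {
    if (previous[key] !== value) changes[key] = value;
  }
  return changes;
}

const lastReported = { program: 1, preview: 2, streaming: false };
const polled = { program: 3, preview: 2, streaming: false };

// Only `program` crosses the network; the full polled state stays local
// to the daemon next to the device.
console.log(diffState(lastReported, polled)); // { program: 3 }
```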

And by running the modules in the daemon, if you connected two different companions to the daemon, they could both be able to run actions on the running connections/instances simultaneously. So this would give the benefit of sharing a limited network resource with multiple companions.


If the module runs in the daemon, the daemon would need to be able to run javascript and whatever we may support in the future.

Yes it would, but only when the user specifies that a connection/instance should be run on that daemon. The v3 module system was designed from the beginning so that modules don't have to be nodejs, and don't have to be the same version of nodejs. I think that will work in our favour, as it means the same could be said for the other direction too (other than one small issue which could be resolved).

If the module runs in the daemon, one module can use only resources of the daemon it is running on. Again I don't feel this is a big con.

I agree. To clarify, I think that a particular connection/instance of a module should run on a single daemon. So if that resource limitation is a problem, then I would question why a connection/instance needs a physical presence on two separate machines; to me that sounds like an odd scenario.

In my opinion the proxying part is a big pro of it, trying to make stuff available to more than one module. If the module is fine with using a resource by an interface provided by the daemon, what would be the pros/cons of running it in Companion or the daemon?

The only pro/con I have currently is in favour of the daemon: it makes things more predictable. For example:

await streamdeck.drawKey(0, ....)
await streamdeck.drawKey(1, ....)

This is a very simple case that expects to draw to a streamdeck as fast as possible. But if the hid proxy it is using is run over a VPN, then a drawKey call which locally would take 5ms might now take 105ms. Let's ignore whether that has any real impact on how the first write behaves; that depends on what the source of this was. But the second write waits for the first to complete before it begins, so it now starts executing 100ms later than it would have.

Unless usages of these proxies are carefully crafted with this latency in mind, they will have a notable impact on how they 'perform'. For a streamdeck, the worst case could manifest as slow drawing with a rolling effect as our code iterates through all the buttons. The best case would mean we are limited to a much slower fps when redrawing each button.

But if the proxies are being used only locally, then I doubt there is a significant enough impact to care about.
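The cost of those serialized awaits can be modelled directly. This toy sketch (the button count and latencies are illustrative, borrowed from the 5ms/105ms example above) shows why the sequential pattern multiplies the round-trip latency by the number of keys, while concurrent writes pay it roughly once:

```javascript
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Pure cost model: n awaited calls in series pay the latency n times;
// concurrent calls pay it roughly once.
const sequentialCost = (n, latencyMs) => n * latencyMs;
const concurrentCost = (n, latencyMs) => latencyMs;

// The serialized-await pattern from the snippet above, simulated.
async function drawAllSequential(keys, latencyMs) {
  for (const key of keys) {
    await delay(latencyMs); // one full round trip per key
  }
}

(async () => {
  const keys = [0, 1, 2, 3, 4, 5, 6, 7];
  console.log(sequentialCost(keys.length, 5)); // 40  -> fine locally
  console.log(sequentialCost(keys.length, 105)); // 840 -> visible rolling redraw
  console.log(concurrentCost(keys.length, 105)); // 105 -> latency paid once
  await drawAllSequential(keys, 1); // timed demo of the serialized awaits
})();
```

This is why a proxy used only locally is probably fine, while the same code over a VPN degrades from an instant redraw to nearly a second of rolling updates.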

If the module relies on a library which uses a resource out of our control, what would be the pros/cons then?

Yeah, in this case a module will only be able to access the resources on the same machine. And it means that the rules enforced by the OS on ownership/exclusivity would have to be respected.

To me, the value in these proxies is to:

One con of running the complete module in the daemon may be, that the module then needs to be OS dependent, where one of my goals is to make as much as possible OS independent in Companion and provide it to the modules in a standardized way. The OS dependent stuff should be done only once for the daemon itself.

Yes, but I think the current model works sufficiently well for this. We require the native libraries which interact with these things to be written such that they are OS independent, then we ship all of those as part of the module. This has resulted in me doing some work to make a native library conform to this structure, but some have been fine without needing any work.

dnmeid commented 6 months ago

The languages I'd consider are:

Yes, I have some abstraction for this stuff in mind. I think there will be more or less one generic module using each resource, e.g. generic-midi, generic-keyboard (replacing vicreo-listener), but these modules will be used a lot. Some resources, like serial, may be used by a few more modules, but I guess most of them will be able to use the provided abstraction.

My first idea was to make this daemon lightweight, but more powerful than e.g. vicreo-listener. So it seems to me that the main drawback of allowing modules and a lot of other stuff to be computed in the daemon is the buildup of size and complexity. I'm fine with that as long as it remains possible to run the daemon on a raspi with maybe 3 streamdeck XLs and GPIO, as it is also meant to replace Satellite. The advantages of running modules in the daemon seem reasonable to me.

Julusian commented 6 months ago

I would rather not use C++/Qt; I do enough C++ that it is a language I wouldn't choose for anything new. I would be happy with rust, and I could accept go. I've not used either of them for something like this, but I'm happy to give it a try.

Since I did the split of companion from the launcher, I have been thinking that it would be a good idea to rewrite the launcher so that it isn't electron, but I haven't yet been able to justify the effort, or the need for another language in companion, to myself. But that should probably follow whatever decision gets made for this; it could potentially serve as a test project to make sure we don't hate the chosen framework. Longer term, having them match would be good for consistency and ease of maintenance.

I would be tempted to use a similar architecture in the daemon, with a 'launcher' process and the main process being nodejs. Even if we don't use nodejs for the launcher, in the majority of cases this daemon will still need nodejs to be able to do streamdecks, unless we want to switch to a rust/go/whatever streamdeck library.


Yes, I have some abstraction for this stuff in mind. I think there will be more or less one generic module using each resource

I think it depends. For something like keyboard, yeah, I doubt there will be more modules which use that. For midi, I can imagine that various midi modules might be useful. I know that some old yamaha desks only support physical midi as a control protocol, so it wouldn't be entirely unreasonable to have a module which exposes that in a nicer way than would be possible through generic-midi.

So I am still a bit unsure about what can/should be abstracted, but I am open to it. It also sounds to me like we don't need to conclude on that before the rest of this can be started, as those abstractions will most likely become additions to the module-api and don't have to be done at the same time as the daemon is created. And I guess that as part of this daemon, #2041 should also be done, so that the code for surface handling is packaged and distributable similarly or identically to the modules.


Some other slightly more UX thoughts:

When I list it like that, it feels like a lot of work, and some duplicated work. But I maintain that this would be a good architecture. Essentially it boils down to: the daemon acts as the io layer; without any Companions connected it should be able to remember the connections it runs and keep running them. Companion then connects to these daemons and provides the mapping from this io to the buttons/grid and triggers, and is still the brains.
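That split can be sketched in a few lines. All names here (`Daemon`, `addConnection`, `reportFeedback`) are invented for illustration; the point is that the daemon persists and runs its connections regardless of whether a Companion is attached, and attached Companions only consume the io:

```javascript
// Rough sketch of "daemon is the io layer": connections live in the daemon,
// Companions come and go and only receive the io stream.
class Daemon {
  constructor() {
    this.connections = new Map(); // connectionId -> config, persisted locally
    this.companions = new Set();
  }
  addConnection(id, config) {
    this.connections.set(id, config); // remembered even with no Companion attached
  }
  attachCompanion(companion) {
    this.companions.add(companion);
  }
  reportFeedback(connectionId, data) {
    // io keeps flowing whether or not anyone is listening; attached
    // Companions map it onto buttons/grid and triggers.
    for (const c of this.companions) c.onFeedback(connectionId, data);
  }
}

const daemon = new Daemon();
daemon.addConnection('vmix-studio', { host: '10.0.0.5' });

const received = [];
daemon.attachCompanion({ onFeedback: (id, data) => received.push([id, data]) });
daemon.reportFeedback('vmix-studio', { input: 3 });
console.log(received.length); // 1
```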

phillipivan commented 6 months ago

Sorry to butt into this conversation. However, I can see another potential advantage / use case of running the modules on said daemon which I'd like to submit for consideration (apart from the hardware interaction)...

Occasionally a protocol will involve broadcast or multicast messaging, in which case, short of some fairly involved network trickery, it is necessary to be in the same network segment as the devices you are talking to. Similarly, it is sometimes necessary to locate control in the same network segment as the controlled device because (a) the protocol specifies no keep-alive (and thus is poorly suited to routing across firewalls) or (b) the device cannot have a gateway configured (I've seen this most on devices with multiple NICs, where only one of those NICs can have a gateway configured).

Running modules in these daemons may open avenues for working with these devices in more complex network architectures, since the daemon could be located logically proximate to the devices and communicate back to the companion host via a more sane TCP connection.
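The relay idea here can be sketched as a small translation step: the daemon sits in the device's segment, hears its multicast/broadcast beacons, and forwards a compact summary over an ordinary TCP connection. The message shape, field names, and beacon format below are invented for illustration (a real daemon would receive the datagram from a dgram socket joined to the device's multicast group):

```javascript
// Translate a locally-received multicast beacon into a small JSON message
// suitable for forwarding to Companion over a routed TCP connection.
function summarizeAnnouncement(rawDatagram, sourceAddress) {
  const beacon = JSON.parse(rawDatagram.toString());
  return JSON.stringify({
    type: 'device-seen',
    address: sourceAddress,
    model: beacon.model,
    seenAt: Date.now(),
  });
}

// Simulated beacon from a device that only announces itself via multicast.
const datagram = Buffer.from(JSON.stringify({ model: 'atem-mini' }));
const forwarded = JSON.parse(summarizeAnnouncement(datagram, '192.168.10.20'));
console.log(forwarded.model); // 'atem-mini'
console.log(forwarded.address); // '192.168.10.20'
```

Only the summary crosses the firewall; the segment-local chatter (and any keep-alive-free control session) stays between the daemon and the device.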