I'm ok with multiple tcp ports - I'm not sure trying to magically determine the protocol used gains much. Separate ports make it a bit easier to troubleshoot with a packet capture.
The udp listener is just raw led traffic, so you couldn't reliably auto-detect the protocol. And of course, being udp rather than tcp, it won't work anyway 8-)
The gain, for the moment:
My main problem is that it's going to be confusing, and it might be useful to rework this before you increase the complexity.
It was not my intention to wrap udp and tcp, which is impossible. My question is: are you interested in a udp-specific wrapper to talk with the API and get access to all functions?
It's a nice idea and it really makes stuff easier for users. As a developer it's a bit harder to trace errors and so on ...
but ...
My main concern is how stable we can get the protocol detection. Or in other words - it must be at minimum rock solid! Currently you can provoke a crash when you accidentally exchange the json/proto ports. If we have data that is not detectable, because it is headerless or the protocols look the same - then this won't work.
E.g. if someday we want hyperion to act as a hue (or whatever smart bulb) that is controlled via tcp/json, then we get in trouble, because our control api and the "hue server" have the same protocol header. Of course we can evaluate the content and detect that the commands for the hue server and for hyperion control look different, but that is complete overkill!
The security benefit isn't that big, because you can globally configure a black/white list for clients (without specifying the port) and then apply this list to all servers/ports. We can put this in a kind of helper function, so we don't write duplicate code.
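A minimal sketch of what such a shared helper could look like, assuming a globally configured allow/deny list (the ClientFilter / isClientAllowed names are illustrative, not existing Hyperion code):

```cpp
// Sketch only: a shared client filter that every server/port could reuse.
// The names and the "empty whitelist allows everyone" rule are assumptions.
#include <QHostAddress>
#include <QList>

struct ClientFilter
{
    QList<QHostAddress> whitelist; // empty means: allow everyone not blacklisted
    QList<QHostAddress> blacklist;

    bool isClientAllowed(const QHostAddress& peer) const
    {
        if (blacklist.contains(peer))
            return false;
        return whitelist.isEmpty() || whitelist.contains(peer);
    }
};
```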
I looked into the boblight server. For that you must look at the first message, and it must start with one of these words: hello, ping, get, set, sync. I wouldn't call that a protocol header. It would be good if every opening message (the first message that establishes the connection) sent some identifier (like http does) - in this case perhaps "<protocol/boblight version=5 />" or so ...
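For illustration, that "detection" boils down to nothing more than a keyword match on the peeked first message (a sketch, not Hyperion code; the function name is made up):

```cpp
#include <QByteArray>
#include <QList>

// Returns true if the first message looks like a boblight client.
// Boblight has no real header, so all we can do is match the opening keyword.
static bool looksLikeBoblight(const QByteArray& firstMessage)
{
    static const QList<QByteArray> keywords = { "hello", "ping", "get", "set", "sync" };
    const QByteArray firstWord = firstMessage.trimmed().split(' ').first();
    return keywords.contains(firstWord);
}
```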
just my thoughts. perhaps we should think of this a bit more ....
provoke a crash when you accidentally exchange json/proto ports
Something I didn't think about. Of course it solves this too, if the detection works well.
it must be at minimum rock solid!
Of course, everything else is undefined behavior, up to a crash.
if someday we want hyperion to act as a hue (or whatever smart bulb) that is controlled via tcp/json then we get in trouble, because our control api and the "hue server" have the same protocol header
Of course, and the current solution is spawning another server. I like the idea of custom headers to make clear what the client wants to do.
A custom header should also work failsafe for proto connections. With http we won't run into issues; the websocket handshake is also always performed with a new socket, so there is no danger that an http connection wants to upgrade AFTER it has been determined to be http static file serving.
Custom Protocol
I think it would be useful to have a response, so the client developer knows everything worked well.
Client sends: <protocol/json version=1/>
If we found a handler, the server sends the request back: <protocol/json version=1/>
If not found, the server sends back an error message and closes the socket: <protocol/unknown/>
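A rough server-side sketch of that handshake, assuming a QTcpSocket and only the json handler as an example (all identifiers are illustrative, nothing here is final):

```cpp
#include <QTcpSocket>

// Sketch: answer the client's protocol announcement before handing the socket over.
void handleHandshake(QTcpSocket* socket)
{
    const QByteArray request = socket->readLine().trimmed();

    if (request == "<protocol/json version=1/>")
    {
        // Handler found: echo the request so the client knows negotiation succeeded.
        socket->write(request + "\n");
        // ... hand the socket over to the json handler here ...
    }
    else
    {
        // No handler found: report the error and close the socket.
        socket->write("<protocol/unknown/>\n");
        socket->disconnectFromHost();
    }
}
```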
Sounds robust so far!
Downsides
Pro
Okay, let's go!
My brief thoughts before bed...
I'm not sure the time investment is worth it in comparison to other things we could do.
As an ex network engineer I really don't like multiple protocols on the same port - we support multiple protocols for different purposes - maybe we should take a step back, evaluate all of them and be clear why we have so many.
A) json for easy programmatic control
B) protobuf for efficient bulk data
C) http for people!
D) websocket for more dynamic aspects of C)
E) udp raw, dmx, e131 for interoperability with other systems
F) boblight - like E) but does anyone use it?
I'm not sure the time investment is worth it in comparison to other things we could do
Yes. It is no problem currently, but this might be handy in the future, if you want to (or need to, see below) create a "new api version" because of changed syntax, or add experimental stuff for testing to keep the "stable" api clean. Currently you need to add a JsonServer v2 if you want to introduce a changed api syntax, or you drop support for the old one, which breaks everything (annoying for devs and users). Compare it with REST services, where you determine the target api by url, e.g. /api_v2/.
As an ex network engineer I really don't like multiple protocols on the same port
I can understand this; you would group HTTP (websocket + static file serving) and keep proto/json where it is. But as described above, there are limits you may hit very fast when you start progressing and expand the api or add more network services. The port management is annoying, especially when it keeps growing. I think we won't drop the multiple instances anytime soon to support more than one led device, which is my primary complaint about using so many ports.
Let's say we add an "api handshake" for the possible progression; at that point, the port merge is just a "little" step.
be clear why we have so many
I don't know, because when is a protocol a protocol? Where is the line to merge them?
So if I look at this, it reminds me of the led device providers, just with receiving instead of sending: we have a lot (LOT) of stuff which is simple json, with additional data available or not. So I'm talking about a TcpServerProvider which redirects the clients to their actual needs :)
Problem
We already face the migration issue when you have no way of directing the clients to different handlers. Take proto3, for example: to prevent issues with v2 you need both (clean solution/deprecation), but you are not able to tell them apart from the data stream. It is up to the protocol how to deal with upgrades, but if the protocol doesn't provide a clean way, what should you do?
After all, it breaks everything, but just once, with huge flexibility for... whatever might come. So my thumbs up is more for the future and for multiple instances. At the very end, a tcp socket is a tcp socket.
joining http and websocket makes sense. I would add json control via websockets. Raw json stuff should be possible to detect, because the other protocols (http/ws) are very clear.
Proto stuff: hmm, is there a way we can get rid of it? I know it breaks compat, but with hyperion.ng we do that anyway. We only need proto because it's fast enough. All important clients are in our hands: the standalone grabbers and hyperiond. We can replace it with binary streaming over tcp, with just a little header to determine protocol/version ... The intention of that new protocol is only to transfer the captured frames, no extra commands - no duplication with the json interface (except image transfer). The good thing is we can drop a nasty dependency :-)
Then we have the - let's call them - device-specific network servers/clients. We have a zoo of network devices, and with all that smart-home-light stuff we will get more. It is good to keep this separate and activate it only when needed. I didn't want to tinker with those protocols, because especially vendor-specific stuff (e.g. hue) can have unpleasant surprises. Atm we send the data, but I can imagine that we implement receiving as well (e.g. hyperion as a hue/mii bulb :-), or controlled by artnet, ...).
Ok, to put it briefly:
(and we can think about what is needed to support multiple devices on one host without those instances. Then, in the long run, hopefully no voodoo is needed to get a strip and a bulb running)
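A very rough sketch of that "little header" idea; the field layout, the magic value and the helper name are assumptions, not an agreed format:

```cpp
#include <QByteArray>
#include <QDataStream>
#include <QIODevice>

// Hypothetical frame header for a protobuf-free image stream.
struct FrameHeader
{
    quint32 magic;   // constant identifying the stream (value to be defined)
    quint8  version; // protocol version, so the format can evolve later
    quint16 width;   // image width in pixels
    quint16 height;  // image height in pixels
    quint32 length;  // payload size in bytes (width * height * 3 for raw RGB)
};

// Serialize header + raw RGB payload into one network-order byte stream.
QByteArray packFrame(const FrameHeader& h, const QByteArray& rgbData)
{
    QByteArray out;
    QDataStream stream(&out, QIODevice::WriteOnly);
    stream.setByteOrder(QDataStream::BigEndian);
    stream << h.magic << h.version << h.width << h.height << h.length;
    out.append(rgbData);
    return out;
}
```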
Raw json stuff should be possible to detect, because the other protocols (http/ws) are very clear
Even as part of a "legacy" period, a protocol definition won't hurt in the long run.
Proto stuff: hmm, is there a way we can get rid of it? I know it breaks compat, but with hyperion.ng we do that anyway.
I think getting rid of it isn't so bad. With the latest issues that came up, a break seems unavoidable. So replacing it with our own proto-inspired byte stream shouldn't be a problem(?), one which doesn't overlap with the json interface in all parts.
Like it, time for a cleanup :)
+1 for dropping the insanely overkill protobuf code!
But I would advise against rolling your own proto implementation, as it can get ugly pretty quick. https://github.com/google/flatbuffers looks promising. You get all the cool things: cross-platform / language support and blazing fast serialization (compared to protobuf), and the best thing, all of this with only a minimal performance overhead, as claimed by google. (Disclaimer: that needs to be tested with the actual code first.)
Tested it and it appears to be working just fine. I can manage a stable 130 packets/s at 128*128 (at 130fps) with 50% cpu on a raspberry pi 3, using ~50 MBit/s; not sure what else to test. (Rewrote the kodi addon too.)
Sounds good! It would be great if you could submit a pull request which replaces protobuf (and probably the updated kodi addon for better testing).
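(For a rough cross-check, assuming uncompressed RGB frames: 128 × 128 pixels × 3 bytes × 130 fps ≈ 6.4 MB/s ≈ 51 Mbit/s, which matches the observed ~50 MBit/s.)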
Thank you!
I will prepare PoC trees on my github for both the addons and the updated server code (some stuff is heavily WIP). We should probably create a separate issue for this. Also, it's my first time working with kodi, so I hope I got everything right.
Great. The shape of the Kodi addon isn't that great, though; I simply don't like the structure/handling. I will open an issue for the flatbuffer receiver implementation, also with improvements compared to protobuf.
The Kodi addon is indeed another construction site.
I will close this. We implemented some of the things discussed here. If there is more demand, we can open a new issue.
API unification
Hyperion has a lot of different ways of communication. Sadly, as a whole it is very incomplete. Depending on the origin of the source data you need different processing wrappers before you can talk to a final hyperion interface. Currently it's this:
I want to merge all functions into one api class. So the final structure should look like this.
No clue about udp yet. For sure @penfold42 can leave some words :) This also has some more interesting stuff behind it:
TcpServer merge
I also want to bring in something different: our massive usage of TcpServers (ports). I won't even think about encrypted TcpServers (ports x2?). These are: HTTP Server, Json-Server, protoServer and, if you follow the qt5websocket mindset, Websockets. So the final structure, together with the API unification, would be this:
They all have just some tiny diffs. In order to decide which handler should be used, the first message will be awaited and a sock->peek() will create a copy of the buffer. This copy will be examined for the following:
So the handlers now do the following:
Test
I successfully captured the first readyRead signal from the webserver in order to peek at the message and forward it to the http processing. However, you need to trigger the slot of the readyRead signal by hand the first time, as the first readyRead has been captured by the webserver. Another idea is to hand over the buffer in an additional argument.
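An illustrative sketch of that peek-based dispatch, assuming the provider owns the QTcpSocket; the probe patterns and the hand-off to the real handlers are placeholders:

```cpp
#include <QTcpSocket>

// Sketch: inspect the first bytes without consuming them, then pick a handler.
void onFirstReadyRead(QTcpSocket* socket)
{
    // peek() copies the buffered data, so the chosen handler can still read
    // the complete first message itself afterwards.
    const QByteArray probe = socket->peek(4096);

    if (probe.startsWith("GET ") || probe.contains("HTTP/1."))
    {
        // HTTP request or websocket upgrade: forward to the webserver handler.
        // Its readyRead slot must be invoked by hand once (or the peeked buffer
        // passed along as an argument), since this first signal was consumed here.
    }
    else if (probe.startsWith("{"))
    {
        // Raw json control connection.
    }
    else
    {
        // Unknown protocol: reply with an error and close, or fall back to a default.
    }
}
```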
@penfold42 @redPanther @Paulchen-Panther I hope this is readable!
Leave your thoughts