jgoerzen opened this issue 4 years ago
I'm pretty sure this library, as it is now, is only really appropriate for medium-bandwidth game protocols over relatively high-bandwidth links. Nothing in this library currently even attempts to handle congestion control, so I definitely wouldn't try to use anything in it as, for example, a general-purpose replacement for TCP / SCTP / QUIC. Those implementations of standard protocols are almost certainly going to be better engineered for general use than anything this library will probably ever have.
This library mostly exists because I found that on many platforms, multiplexing different streams using TCP or using SCTP / QUIC is a royal PITA. You can't really rely on being able to use huge numbers of TCP ports, the landscape of existing SCTP implementations and their Rust integration is pretty grim, and QUIC is very early and heavyweight. The realistic options I saw open to me were to try to adapt a Rust QUIC implementation OR make my own protocol, and I took the second option. I don't really have to worry about the general use case: I'm trying to make a high-player-count networked game, so that's really the only kind of use I personally care about. This library is designed for when the amount of bandwidth the game needs is much less than the bandwidth of the underlying link, so it tries to optimize above all else for reduced latency. It sounds like you need the opposite: a library for reliable connections that optimizes above all else for low bandwidth.
Now, if something like plain TCP is inappropriate for this case, or if in your environment real TCP is difficult for some reason and you need your own protocol, it might be useful to take a look at the way the reliable channel in turbulence is implemented and maybe get some inspiration? As it is, the existing reliable channel is PROBABLY too optimized for latency and basically not at all optimized for bandwidth use (it always sends one ack packet per received packet, and it constantly re-sends a small bit too much data in the expectation that the other side will have processed enough by the time it gets there, so its bandwidth efficiency could be a LOT better!). It might be sensible to use the multiplexing parts of this library with your own bandwidth-sipping reliable protocol? The existing reliable protocol is, imo, pretty self-contained and hopefully easy to follow, but obviously I'm pretty biased here. The only two files you need to understand it are the one that defines the send / receive buffers (windows) (https://github.com/kyren/turbulence/blob/master/src/windows.rs) and the channel implementation itself (https://github.com/kyren/turbulence/blob/master/src/reliable_channel.rs). I'm not some genius internet protocol inventor; this is just kind of baby's first TCP. It was just remarkably hard to find pre-existing implementations of such things that don't come with a whole world of things I'll never need bolted on top and impossible to remove.
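If it helps to have something concrete in mind while reading those two files, here is a rough sketch of the kind of sliding-window bookkeeping a channel like this does: the sender keeps unacked segments around for retransmission, and the receiver reorders segments and reports a cumulative ack. All the type and method names below are hypothetical illustrations, not turbulence's actual API.

```rust
// Hypothetical sketch of sliding-window send / receive state, not the real
// windows.rs or reliable_channel.rs API.
use std::collections::BTreeMap;

struct SendWindow {
    next_seq: u64,                   // sequence number of the next segment to send
    unacked: BTreeMap<u64, Vec<u8>>, // segments sent but not yet acknowledged
}

impl SendWindow {
    fn new() -> Self {
        SendWindow { next_seq: 0, unacked: BTreeMap::new() }
    }

    // Assign a sequence number and remember the segment until it is acked.
    fn send(&mut self, data: Vec<u8>) -> u64 {
        let seq = self.next_seq;
        self.next_seq += 1;
        self.unacked.insert(seq, data);
        seq
    }

    // A cumulative ack frees every segment below `ack`.
    fn ack(&mut self, ack: u64) {
        self.unacked.retain(|&seq, _| seq >= ack);
    }

    // Everything still unacked is a candidate for retransmission on timeout.
    fn unacked_segments(&self) -> impl Iterator<Item = (&u64, &Vec<u8>)> {
        self.unacked.iter()
    }
}

struct RecvWindow {
    next_expected: u64,                   // lowest sequence number not yet delivered
    out_of_order: BTreeMap<u64, Vec<u8>>, // segments waiting for an earlier gap to fill
}

impl RecvWindow {
    fn new() -> Self {
        RecvWindow { next_expected: 0, out_of_order: BTreeMap::new() }
    }

    // Accept a segment; return the bytes that are now contiguous plus the
    // cumulative ack to report back (everything below it has arrived).
    fn receive(&mut self, seq: u64, data: Vec<u8>) -> (Vec<u8>, u64) {
        if seq >= self.next_expected {
            self.out_of_order.insert(seq, data);
        }
        let mut delivered = Vec::new();
        while let Some(segment) = self.out_of_order.remove(&self.next_expected) {
            delivered.extend_from_slice(&segment);
            self.next_expected += 1;
        }
        (delivered, self.next_expected)
    }
}

fn main() {
    let mut tx = SendWindow::new();
    let mut rx = RecvWindow::new();

    let s0 = tx.send(b"hello ".to_vec());
    let s1 = tx.send(b"world".to_vec());

    // Pretend the link delivered segment 1 before segment 0: nothing can be
    // released to the application yet.
    let (data, _ack) = rx.receive(s1, b"world".to_vec());
    assert!(data.is_empty());

    // Segment 0 arrives; both segments are now delivered in order.
    let (data, ack) = rx.receive(s0, b"hello ".to_vec());
    assert_eq!(data, b"hello world".to_vec());

    // The cumulative ack lets the sender drop its retransmission state.
    tx.ack(ack);
    assert_eq!(tx.unacked_segments().count(), 0);
}
```

A real channel also needs retransmission timers, a bounded window size for flow control, and sequence-number wrapping, which is where most of the actual complexity (and the latency vs. bandwidth trade-offs mentioned above) lives.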
I imagine that for low bandwidth, though, you'd have a hard time beating TCP itself, so if you can figure out how to use a pre-existing TCP implementation, that might be the first thing to try?
Thank you very much for this explanation!
So what drew me to your project is that it's not tied to UDP in the way so many others (the various QUIC implementations, for instance) are. In my case, some of the devices I'm working with (e.g., LoRA radios) have a serial interface (often packetized) with no IP stack whatsoever. The overhead of running TCP/IP across a very slow link is significant due to the TCP and IP headers, and the performance is generally quite bad. PPP with VJ header compression helps somewhat and gets it halfway decent, but it turns out that protocols from the modem era do pretty well here. I'm probably going to re-implement the UUCP protocol "i" in Rust and see how that goes.
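Just to put numbers on that header overhead, here is a back-of-the-envelope calculation. It assumes minimal, uncompressed 20-byte IPv4 and 20-byte TCP headers and ignores link-layer framing and ack traffic; the figures are illustrative, not measurements from any particular radio.

```rust
// Back-of-the-envelope: how long the uncompressed TCP/IP headers alone take to
// transmit on a very slow link, ignoring framing, ack traffic, and payload.
fn main() {
    let link_bps = 300.0_f64;               // raw link rate in bits per second
    let header_bytes = 20.0_f64 + 20.0_f64; // minimal IPv4 header + minimal TCP header

    let seconds_per_packet = header_bytes * 8.0 / link_bps;
    println!(
        "{} header bytes at {} bps: about {:.2} seconds per packet before any payload",
        header_bytes, link_bps, seconds_per_packet
    );
    // Prints: 40 header bytes at 300 bps: about 1.07 seconds per packet before any payload
}
```

With 32-byte frames that is more header than payload, which is part of why VJ compression (squeezing the usual 40 header bytes down to a handful for an established connection) only gets things to "halfway decent".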
Totally understood about QUIC. Lots of good things about that protocol, but simplicity isn't one of them.
Oh okay, that makes sense; I hadn't considered that at such low packet sizes and bandwidth limits, the header overhead alone is significant! I too wish there were more pluggable network things for reliability and such, which is kind of another reason this project exists (if there were some pluggable TCP-like thing already, I might have just used that).
Hi,
This isn't an issue, but I couldn't find contact information for you anywhere.
First, thanks for writing this. It looks interesting!
I am interested in very low-bandwidth, long-distance radio links. These may be via amateur radio protocols, via LoRA (my own lorapipe, https://github.com/jgoerzen/lorapipe, works with that), or via satellite. Bandwidth of these links ranges, depending on the technology in play, from about 0.3 Kbps to 100 Kbps. Many of them are actually packetized already and may provide guarantees roughly similar to UDP (error-checked but not reliable, though most guarantee ordering if not delivery). So you can perhaps see why I'm interested in this protocol. Those that are packetized tend to use packets ranging from about 32 bytes up to maybe several hundred bytes, with an Ethernet-like MTU of 1500 being a rather high-end outlier.
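(To make that concrete, the interface these links present looks roughly like the sketch below. The trait and names are purely hypothetical illustrations, not lorapipe's or turbulence's actual API.)

```rust
// Hypothetical sketch of the minimal interface a packetized radio link presents:
// small, ordered-but-unreliable frames with no IP stack underneath.
use std::collections::VecDeque;
use std::io;

trait PacketLink {
    /// Largest payload a single frame can carry, e.g. 32 bytes up to a few
    /// hundred for many of these radios, ~1500 only for high-end outliers.
    fn mtu(&self) -> usize;

    /// Send one frame; the link may drop it, but won't reorder or corrupt it.
    fn send(&mut self, frame: &[u8]) -> io::Result<()>;

    /// Receive the next intact frame, if one is available.
    fn recv(&mut self) -> io::Result<Vec<u8>>;
}

/// Trivial in-memory loopback, just to show the shape of the interface.
struct Loopback {
    queue: VecDeque<Vec<u8>>,
    mtu: usize,
}

impl PacketLink for Loopback {
    fn mtu(&self) -> usize {
        self.mtu
    }
    fn send(&mut self, frame: &[u8]) -> io::Result<()> {
        if frame.len() > self.mtu {
            return Err(io::Error::new(io::ErrorKind::InvalidInput, "frame exceeds MTU"));
        }
        self.queue.push_back(frame.to_vec());
        Ok(())
    }
    fn recv(&mut self) -> io::Result<Vec<u8>> {
        self.queue
            .pop_front()
            .ok_or_else(|| io::Error::new(io::ErrorKind::WouldBlock, "no frame queued"))
    }
}

fn main() -> io::Result<()> {
    let mut link = Loopback { queue: VecDeque::new(), mtu: 32 };
    let frame = b"hello, 32-byte-MTU radio";
    assert!(frame.len() <= link.mtu());
    link.send(frame)?;
    assert_eq!(link.recv()?, frame.to_vec());
    Ok(())
}
```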
My question is: what is the overhead of your reliable protocol when working with a pure binary stream? Also, do you think it would be suitable for situations in which it may actually take a substantial fraction of a second for a packet to be transmitted (or maybe even several seconds for the very slowest)? When dealing with a 300 bps link, every added byte definitely counts!
Thanks!