Not done yet. The big item in the last commit is the removal of the limit on datagram sizes for applications: fragment sizes are now constrained only by the packet size, instead of being artificially limited to less than 1200 bytes. I still need to add support for FEC.
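As a rough illustration of that change, the sketch below derives the fragment payload size from the space available in the outgoing packet instead of a fixed 1200-byte cap. All names here (`fragment_payload_size`, `FRAGMENT_HEADER_SIZE`, `space_available`) are hypothetical, not the actual quicrq identifiers.

```c
/* Hypothetical sketch: size a media fragment from the space the stack
 * reports for the next packet, instead of capping it at 1200 bytes.
 * Names are illustrative, not the quicrq API. */
#include <stddef.h>

#define FRAGMENT_HEADER_SIZE 16 /* assumed per-fragment header overhead */

size_t fragment_payload_size(size_t space_available, size_t object_bytes_left)
{
    /* No artificial 1200-byte cap: use whatever fits in the packet. */
    if (space_available <= FRAGMENT_HEADER_SIZE) {
        return 0; /* not enough room for the fragment header, skip this packet */
    }
    size_t max_payload = space_available - FRAGMENT_HEADER_SIZE;
    return (object_bytes_left < max_payload) ? object_bytes_left : max_payload;
}
```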
There is still the other half of the extra repeat to do before merging this PR. The current code performs the extra repeat when a packet is lost, but not yet when a packet is delayed at the previous hop. There are pluses and minuses:
@suhasHere this is ready to check in. I do have mixed feelings about the "extra copy" trials: they bring quite a bit of complexity, and the advantages are not obvious. I see cases where the extra overhead increases latency. On the other hand, the management of forwarding and retransmission by fragments is valuable.

The following data show the average latency of the various methods, as measured in the tests "triangle_datagram_loss" and "datagram_triangle_extra". The latency is measured from the time of sending to the time of possible delivery, defined as the maximum of the delivery time of the previous packet and the arrival time of this packet (a sketch of that computation appears below). Numbers are in milliseconds. The statistics skip the first 4 objects in the series, because the delays there are influenced by the starting conditions.
Repeat units | Repeat mode | Extra | Average (ms) | Max (ms) | Stdev (ms) |
---|---|---|---|---|---|
Objects | Stream | none | 54 | 239 | 37 |
Fragments | Stream | none | 43 | 186 | 25 |
Fragments | Datagram | none | 44 | 109 | 22 |
Fragments | Datagram | on loss | 48 | 142 | 22 |
Fragments | Datagram | after delay | 43 | 134 | 22 |
Making the repeat decision per fragment rather than per frame allows for end-to-end reassembly. Sending the repeated fragments as datagrams avoids head-of-line blocking on the "control" stream. But adding extra copies for redundancy does not seem to improve the results in this particular test.
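For reference, here is the latency metric from the table as a minimal sketch, assuming per-object send and arrival timestamps in microseconds; the function and variable names are illustrative, not taken from the test code.

```c
/* Hypothetical sketch of the latency metric used above: the possible
 * delivery time of object i is the later of its arrival time and the
 * delivery time of object i-1; latency = delivery time - send time.
 * The first SKIP objects are excluded from the statistics. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define SKIP 4

void compute_latency_stats(const uint64_t* send_us, const uint64_t* arrival_us, size_t count)
{
    uint64_t prev_delivery = 0;
    double sum = 0, sum_sq = 0, max_latency = 0;
    size_t n = 0;

    for (size_t i = 0; i < count; i++) {
        uint64_t delivery = (arrival_us[i] > prev_delivery) ? arrival_us[i] : prev_delivery;
        prev_delivery = delivery;
        if (i >= SKIP) {
            double latency_ms = (double)(delivery - send_us[i]) / 1000.0;
            sum += latency_ms;
            sum_sq += latency_ms * latency_ms;
            if (latency_ms > max_latency) {
                max_latency = latency_ms;
            }
            n++;
        }
    }
    if (n > 0) {
        double mean = sum / (double)n;
        double stdev = sqrt(sum_sq / (double)n - mean * mean);
        printf("average %.0f ms, max %.0f ms, stdev %.0f ms\n", mean, max_latency, stdev);
    }
}
```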
In the previous version, losses were corrected by sending "Repair" packets on the control stream. This has proven sub-optimal; see issue #61. This PR instead manages a list of pending datagrams on each media stream, and ensures that datagrams marked as lost by the stack are repeated as datagrams.
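A minimal sketch of that mechanism is shown below, assuming a singly linked list of pending fragments per media stream and an ack/loss notification from the stack; all structure and function names are hypothetical, not the actual quicrq types.

```c
/* Hypothetical sketch of the per-stream pending-datagram list: each
 * fragment sent as a datagram stays on the list until the stack confirms
 * it; on a loss notification the fragment is requeued so it is repeated
 * as a datagram, not on the control stream. Names are illustrative. */
#include <stdint.h>
#include <stdlib.h>

typedef struct st_pending_fragment_t {
    struct st_pending_fragment_t* next;
    uint64_t object_id;
    uint64_t offset;
    size_t length;
    const uint8_t* data;
} pending_fragment_t;

typedef struct st_media_stream_ctx_t {
    pending_fragment_t* pending_first; /* fragments awaiting ack or loss signal */
    pending_fragment_t* ready_to_send; /* fragments queued for (re)sending as datagrams */
} media_stream_ctx_t;

/* Called when the stack reports the fate of a previously sent datagram. */
void on_datagram_acked_or_lost(media_stream_ctx_t* stream, pending_fragment_t* frag, int was_lost)
{
    /* Unlink the fragment from the pending list. */
    pending_fragment_t** pp = &stream->pending_first;
    while (*pp != NULL && *pp != frag) {
        pp = &(*pp)->next;
    }
    if (*pp == frag) {
        *pp = frag->next;
    }

    if (was_lost) {
        /* Repeat the fragment as a datagram: push it back on the send queue. */
        frag->next = stream->ready_to_send;
        stream->ready_to_send = frag;
    } else {
        free(frag); /* acknowledged, no repeat needed */
    }
}
```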
The initial commit falls short of the solutions listed in issue #61. The use of FEC for repeated packets or for "old" packets is not implemented. Please don't check the code in now; these fixes will come as the work progresses.