mwelzl / draft-iccrg-pacing


Time scales (or: application layer) #11

Open mwelzl opened 2 weeks ago

mwelzl commented 2 weeks ago

The "Sammy" paper: https://dl.acm.org/doi/10.1145/3603269.3604839 paces at a larger time scale - avoiding to transmit video data as a bulk but instead spreading it out. This can be done with or without RTT-timescale pacing underneath, and should be discussed as a separate thing IMO.

Perhaps the right angle is to talk about different time scales, or perhaps it is about being at the application layer (because this is really application-payload specific).

mwelzl commented 1 day ago

A text donation from Ingemar Johansson:

Frame-based transmission, for instance based on the output of video coders, has the property that video frames are typically generated at regular intervals. For example, a video coder that encodes video at 50 frames per second outputs a frame every 20 ms. Each frame is typically split up into MTU-sized packets. Video coders also typically produce frames of varying size, depending on the complexity of the input signal.
This has the consequence that the packet pacing cannot be tuned to match the nominal bitrate, as this would mean that large video frames are unnecessarily delayed on the sender side. SCReAM (https://datatracker.ietf.org/doc/draft-johansson-ccwg-rfc8298bis-screamv2/), as an example, paces packets at a 50% higher rate than the nominal bitrate. This has the effect that an average video frame in the example above is transmitted in 20/1.5 = 13.3 ms. This reduces the risk that video frames are delayed unnecessarily, except when the video frame is unusually large.
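A minimal sketch of the headroom arithmetic described above (not taken from the draft or from SCReAM code): the 50 fps frame rate and the 1.5x headroom factor come from the example, while the nominal bitrate and frame sizes are assumed values for illustration only.

```python
# Sketch of the pacing-headroom arithmetic from the text above.
# Assumptions: nominal bitrate and frame sizes are hypothetical; the
# 50 fps frame rate and 1.5x headroom factor are the example's values.

NOMINAL_BITRATE_BPS = 2_000_000   # hypothetical nominal video bitrate
FRAME_RATE_HZ = 50                # 50 fps -> one frame every 20 ms
HEADROOM = 1.5                    # pace 50% faster than the nominal bitrate

pacing_rate_bps = NOMINAL_BITRATE_BPS * HEADROOM
frame_interval_s = 1.0 / FRAME_RATE_HZ            # 0.020 s

def frame_tx_time_s(frame_bytes: int) -> float:
    """Time to pace one video frame onto the wire at the pacing rate."""
    return frame_bytes * 8 / pacing_rate_bps

# An average frame carries nominal_bitrate / frame_rate bits:
avg_frame_bytes = NOMINAL_BITRATE_BPS / FRAME_RATE_HZ / 8  # 5000 bytes

print(f"frame interval:    {frame_interval_s * 1e3:.1f} ms")   # 20.0 ms
print(f"avg frame tx time: {frame_tx_time_s(avg_frame_bytes) * 1e3:.1f} ms")
# -> 13.3 ms = 20 ms / 1.5, matching the example; only unusually large
#    frames need longer than one frame interval to drain.
```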

mwelzl commented 13 hours ago

From Grenville Armitage, this is a different way of looking at it - probably a better phrasing, not about time scales:

Sammy selects a target upper-bound rate on a per-video-chunk basis, and then relies on underlying TCP packet-level pacing to instantiate the per-chunk rate-limit target. So it decouples the rate-target selection logic (also referred to as "application-informed pacing" in the paper) from the instantiation of that upper bound (in this specific case relying on FreeBSD's HPTS subsystem to provide TCP packet-granularity pacing, but it could of course be any other subsystem achieving similar on-the-wire outcomes).
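A minimal sketch of that decoupling, not taken from the Sammy paper: the application picks a per-chunk upper-bound rate and hands it to the kernel's packet-level pacer. Here the Linux SO_MAX_PACING_RATE socket option stands in for the FreeBSD HPTS pacing used in the paper, and the rate-selection rule (spreading a chunk over a deadline) and parameters are hypothetical.

```python
import socket

# Linux socket option for kernel packet-level pacing; used here as a
# stand-in for FreeBSD's HPTS pacing mentioned above (an assumption,
# not what the Sammy paper uses). 47 is the Linux constant, as a
# fallback for Python builds that don't expose it.
SO_MAX_PACING_RATE = getattr(socket, "SO_MAX_PACING_RATE", 47)

def select_chunk_rate_bps(chunk_bytes: int, deadline_s: float) -> int:
    """Hypothetical application-layer rate selection: spread the chunk
    over its delivery deadline instead of sending it as a burst."""
    return int(len_bits := chunk_bytes * 8) and int(len_bits / deadline_s)

def send_chunk(sock: socket.socket, chunk: bytes, deadline_s: float) -> None:
    # 1) Application-informed step: pick the per-chunk upper-bound rate.
    rate_bps = select_chunk_rate_bps(len(chunk), deadline_s)

    # 2) Instantiation step: delegate packet-granularity pacing to the
    #    transport/kernel; the application only sets the rate cap.
    #    On Linux, SO_MAX_PACING_RATE is specified in bytes per second.
    sock.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE, rate_bps // 8)

    sock.sendall(chunk)
```

The point of the sketch is only the split of responsibilities: the per-chunk rate target lives in application code, while enforcing it packet by packet is left to whatever pacing subsystem the stack provides.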