Open ianswett opened 1 month ago
IIUC, this field is a per-hop TTL rather than an end-to-end one. The content origin can set the TTL to the end-to-end deadline, and each hop decreases the value before passing it to the downstream relay. Is that correct?
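The hop-by-hop decrement described above can be sketched as follows. This is a minimal illustration, not text from the draft; `forward_ttl` and its parameters are hypothetical names.

```python
import time

def forward_ttl(received_ttl_ms: int, arrival_time: float) -> int:
    """Illustrative relay logic: decrement the origin-set TTL by the
    local cache dwell time before forwarding, so the value forwarded
    downstream approximates the remaining end-to-end deadline.
    All names here are hypothetical, not from the draft."""
    dwell_ms = int((time.monotonic() - arrival_time) * 1000)
    remaining = received_ttl_ms - dwell_ms
    return max(remaining, 0)  # never forward a negative TTL
```

Under this scheme the last relay in the chain sees a TTL close to whatever time remains of the origin's deadline, which is the end-to-end behavior the comment asks about.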
I'm not sure this PR solves either of the problems we're trying to solve with the cache TTL fields.
Individual Review:
I think this PR needs additional, probably normative, guidance about when a caching relay starts the expiration timer. For example, with stream-per-object and a large object, does the timer start when the first byte or the last byte of the object is placed in the cache? The same question applies to the other forwarding modes.
Just for the notes: Mo raised a key point on the call that if we tail-drop objects under some priority mechanism, we may never receive the end-of-group marker. We probably need to adjust the text to be very careful about how we detect that no more data will ever arrive for a group or track.
Victor made the point that perhaps we should start the timer at the start of the next group rather than at the end of the current group.
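The two timer-start policies under discussion (first byte cached vs. object complete) can be sketched like this. This is a hypothetical illustration of the design question, not the PR's text; `CacheEntry` and its fields are invented names.

```python
import time

class CacheEntry:
    """Illustrative cache entry contrasting the two timer-start policies
    the review asks about: expiration measured from when the first byte
    is cached, versus from when the last byte arrives (object complete).
    Names are hypothetical, not from the draft."""

    def __init__(self, ttl_s: float, start_on_first_byte: bool):
        self.ttl_s = ttl_s
        self.start_on_first_byte = start_on_first_byte
        self.first_byte_at = None   # set when any data is first cached
        self.last_byte_at = None    # set only once the object is complete

    def add_bytes(self, complete: bool = False):
        now = time.monotonic()
        if self.first_byte_at is None:
            self.first_byte_at = now
        if complete:
            self.last_byte_at = now

    def is_expired(self) -> bool:
        anchor = (self.first_byte_at if self.start_on_first_byte
                  else self.last_byte_at)
        if anchor is None:
            return False  # timer has not started yet
        return time.monotonic() - anchor > self.ttl_s
```

Note that under the start-on-last-byte policy, an object whose tail is dropped (Mo's point above) never completes, so its timer never starts; that is exactly the failure mode that motivates starting the timer at the next group instead.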
Some of the open issues from the last interim
Based on discussion last week and on slack, I believe this is where the WG is heading.
It is possible this could be a per-track or per-subscription value to save bytes on the wire, particularly for the Object-per-Stream or Object-per-Datagram modes.
It's possible this value could be decreased by the cache dwell time at each hop, but I didn't hear clear consensus either way on that question.
Fixes #440 Fixes #415