vasilvv closed this issue 2 months ago
Yes, the timestamp is media-payload specific and should be scoped inside the media container. Relays can use priority and delivery order to make forward/drop decisions. Our goal should be to make relays as media-agnostic as possible.
Yeah, I believe that using the delivery order is more powerful than a TTL based on the timestamp. I want to make sure there's consensus first.
One thing to note is that we might want to specify a timestamp to support discontinuities (#15) without rewriting the media. The playlist contains a timestamp, effectively used as a base offset, so the CDN does not need to constantly rewrite the media bitstream with the correct timestamp.
In the case of Twitch, advertisements always start with a PTS of 0. When we want to display an advertisement, we can insert the advertisement segments into the playlist with a discontinuity, allowing us to use the same segments across any broadcast and improving cacheability.
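A minimal sketch of the base-offset idea described above (the function and values are illustrative, not from any draft): the playlist carries a base timestamp, and the player maps each segment's local PTS onto the broadcast timeline, so the CDN never rewrites the media bitstream.

```python
# Hypothetical sketch: cached ad segments always start at PTS 0; a
# playlist-level base offset places them on the broadcast timeline.

def presentation_time(base_offset: float, segment_pts: float) -> float:
    """Map a segment-local PTS onto the broadcast timeline."""
    return base_offset + segment_pts

# The same cached ad segment (PTS 0.0, 2.0, 4.0, ...) can be reused in any
# broadcast; only the playlist's base offset differs, improving cacheability.
print(presentation_time(3600.0, 0.0))  # ad inserted one hour into the stream
```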
I think there are two different things here. One is that we need some sort of timestamp that lets you know how to deliver the media to the decoder. The relays don't need to know how to present it, so any information about presentation times can live in the encrypted payload, but the envelope needs something about delivery order.
And then we have a separate thing, which is how long to keep this before it is useless. This allows the relays to get rid of stuff that is of no use, and allows the application to indicate how long it wants to be billed for storage on the relay. This could be a delta time or an expiry time, but we probably need something that indicates when relays can start ignoring the information. If there is no caching, perhaps this can be avoided, but it is still good to be able to toss out stuff that is too old to be useful even before starting the priority logic. This does not remove the need for priority; it is just a pre-filter on it that lets the application tell relays useful information.
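The two-stage decision described above might be sketched as follows (field names and the selection function are hypothetical, purely to illustrate the "expiry as pre-filter, then priority" ordering):

```python
import time

# Hypothetical sketch: a relay first discards objects past their expiry
# time (the pre-filter), then orders the remaining objects by delivery
# priority. Neither field name comes from the draft.

def select_for_delivery(objects, now=None):
    """Drop expired objects, then order the rest by priority (lower first)."""
    now = time.time() if now is None else now
    live = [o for o in objects if o["expires_at"] > now]   # TTL pre-filter
    return sorted(live, key=lambda o: o["priority"])       # then priority

objs = [
    {"id": "a", "priority": 2, "expires_at": 100.0},
    {"id": "b", "priority": 1, "expires_at": 100.0},
    {"id": "c", "priority": 0, "expires_at": 10.0},  # too old, discarded
]
print([o["id"] for o in select_for_delivery(objs, now=50.0)])  # ['b', 'a']
```

Note that the expiry check needs no media knowledge at all, which keeps the relay media-agnostic as argued earlier in the thread.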
> And then we have a separate thing, which is how long to keep this before it is useless. This allows the relays to get rid of stuff that is of no use and allows the application to indicate how long it wants to be billed for storage on the relay.
Will the business model of such a relay charge based on storage? From what I know, most current live-streaming CDN systems charge for the total media delivery throughput (viewers × bitrate), which is meaningful to the customer. After all, the customer pays for its live content to be delivered, not to be kept in some kind of cold storage.
That being said, I agree that the delivery order and the expiration/lifetime are two different kinds of timestamp. We should not expose both of them as a single raw media timestamp and hope that the relay/endpoint derives the correct dropping and cache-purging behavior with its own logic. A better way is to define a clear command in the control message (metadata), so that the relay does not need to understand media/codec-specific logic.
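One way to picture the explicit, media-agnostic metadata proposed above (all names here are invented for illustration; the thread defines no wire format):

```python
from dataclasses import dataclass

# Hypothetical sketch: carry the two kinds of "timestamp" as separate,
# explicit fields in the envelope/control metadata instead of one raw
# media PTS. Presentation timing stays inside the (possibly encrypted)
# media payload, out of the relay's sight.

@dataclass(frozen=True)
class ObjectMetadata:
    delivery_order: int  # relays forward/drop strictly by this ordering
    ttl_ms: int          # how long the object stays useful (cache purging)

def should_purge(meta: ObjectMetadata, age_ms: int) -> bool:
    """A relay can purge by TTL without any codec-specific knowledge."""
    return age_ms > meta.ttl_ms

m = ObjectMetadata(delivery_order=42, ttl_ms=2000)
print(should_purge(m, age_ms=2500))  # True
```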
There's no timestamp in the current draft, and some of the discussion here is going to delivery timeouts. Closing as this has been overtaken by events. Please open a new issue if you think there's something that needs to be addressed here.
The draft currently defines a timestamp layer property that is the min-PTS of the bitstream in the layer. I find those kinds of parameters somewhat concerning, because the timing information at the layer level may contradict what's in CMAF (this is already a problem for certain codec/container combinations and usually results in odd implementation-defined behavior).
I believe that `order` should be enough, and we should remove the timestamp (the current text in the draft suggests its actual utility is unclear).