This has come up in a couple of contexts, including the provision of "wait signals" and the insertion of pre-recorded segments into an otherwise live conference application.
The important points are:

- The media is pre-recorded (but the media may be available in multiple formats/qualities)
- The desired transmission mechanism is RTP
I think this can be achieved by:

- Providing a means to create frames based on existing encoded video + metadata
- Providing a means to enqueue those frames on an existing RTCRtpSender
- Providing a means to surface signals from the RTCRtpSender (available bandwidth, requests for new keyframes) so that the application can process them in an application-specific manner
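To make the shape of this concrete, here is a minimal sketch of what the first two points might look like from the application's side. None of these names are standardized: `createEncodedVideoFrame()` and `sender.enqueueFrame()` are assumptions invented for illustration (loosely modeled on the `RTCEncodedVideoFrame` shape from WebRTC Encoded Transform); only the metadata helper is plain runnable JavaScript.

```javascript
// Pure helper: build the metadata that would accompany a chunk of
// already-encoded video. Field names here are illustrative, not spec'd.
function buildFrameMetadata({ timestampUs, isKeyframe, spatialLayer = 0 }) {
  return {
    timestamp: timestampUs,                  // media timestamp, microseconds
    type: isKeyframe ? "key" : "delta",      // keyframe vs. delta frame
    spatialIndex: spatialLayer,              // layer index, e.g. for SVC
  };
}

// Sketch of the send loop. Browser-only and hypothetical:
// createEncodedVideoFrame() wraps pre-encoded bytes + metadata without
// re-encoding, and enqueueFrame() hands the frame straight to the
// sender's packetizer, bypassing the encoder. Neither API exists today.
async function sendPrerecorded(sender, chunks) {
  for (const chunk of chunks) {
    const frame = createEncodedVideoFrame(
      chunk.data,
      buildFrameMetadata(chunk)
    );
    await sender.enqueueFrame(frame);
  }
}
```

The key design point is that the browser never re-encodes: the application supplies bytes that are already in a negotiated format, and the sender only packetizes and paces them.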
Responses to congestion signals may involve switching to a lower-quality source of frames (much as DASH does), switching to a video that shows "wait a bit", or applying frame decimation of some kind (assuming the video is encoded in a decimation-compatible format such as an SVC encoding). These decisions don't need to be part of the WebRTC component.
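Because these decisions live in the application, the policy can be ordinary JavaScript. A minimal sketch, assuming the application receives a bandwidth estimate (in bits per second) from the sender; the quality ladder, thresholds, and the "wait-slate" fallback are invented for illustration:

```javascript
// Illustrative quality ladder: each rung is a pre-encoded rendition and
// the minimum estimated bandwidth at which we'd pick it.
const LADDER = [
  { name: "1080p", minBps: 4_000_000 },
  { name: "720p",  minBps: 2_000_000 },
  { name: "360p",  minBps:   500_000 },
];

// Pick the best rendition that fits the estimate, much like a DASH ABR
// rule; fall back to a "please wait" slate when even the lowest rung
// won't fit the available bandwidth.
function selectSource(estimatedBps) {
  for (const rung of LADDER) {
    if (estimatedBps >= rung.minBps) return rung.name;
  }
  return "wait-slate";
}

selectSource(2_500_000); // → "720p"
selectSource(100_000);   // → "wait-slate"
```

The same dispatch point could instead trigger frame decimation (dropping an SVC temporal layer) rather than a source switch; the browser only needs to deliver the bandwidth signal, not interpret it.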