w3c / webrtc-encoded-transform

WebRTC Encoded Transform
https://w3c.github.io/webrtc-encoded-transform/

Applicability Statement #32

Open aboba opened 4 years ago

aboba commented 4 years ago

The Insertable Streams API provides access to the RTP payload, which has generated considerable interest. I have heard suggestions that it might be used to implement some of the following:

It might be helpful to have an applicability statement somewhere in the document, to clarify what use cases might not be supportable.

alvestrand commented 4 years ago

It actually doesn't provide access to the full information of the RTP payload; the RTP headers and the packetization that splits frames into RTP packets aren't reflected in the Insertable Streams API.
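As a minimal sketch of what the API does expose (assuming the createEncodedStreams() shape from the explainer; peerConnection, track and mediaStream are placeholders): the transform sees whole encoded frames, not individual RTP packets or their headers.

// Minimal sketch, assuming the createEncodedStreams() API shape from the explainer.
const sender = peerConnection.addTrack(track, mediaStream);
const { readable, writable } = sender.createEncodedStreams();

readable
  .pipeThrough(new TransformStream({
    transform(encodedFrame, controller) {
      // encodedFrame.data holds the encoded payload of a whole frame;
      // per-packet RTP headers and packetization are not visible here.
      console.log(encodedFrame.data.byteLength);
      controller.enqueue(encodedFrame);
    }
  }))
  .pipeTo(writable);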

Could we take some inspiration from the API proposed for WebTransport?

guest271314 commented 4 years ago

One use case is extending Serializable Frames to support input from a file (.opus, .webm, .mkv, etc.) or from an existing ReadableStream whose underlying source is encoded with one of the codecs supported by WebRTC, without the need for an existing RTCPeerConnection peer, e.g.,

fetch('/path/to/resource')
  .then((response) => response.body)
  .then((readable) => {
    readable.pipeTo(senderStreams.writableStream);
    // or transfer senderStreams.writableStream to a Worker or other thread
    // with postMessage(senderStreams.writableStream, [senderStreams.writableStream]);
  });

https://discourse.wicg.io/t/proposal-body-mediastream-body-mediastreamtrack/3956.

Consider

parec -v --raw -d alsa_output.pci-0000_00_1b.0.analog-stereo.monitor | opusenc --raw-rate 44100 - - \
    | ffmpeg -y -i - -c:a copy $HOME/localscripts/output.webm

where the output file is read by JavaScript in the browser in "real time", in parallel with the write.

Currently, the workaround I am using to complete this project https://github.com/guest271314/captureSystemAudio/projects/1#column-9323258 is to stream the file to MediaSource to get a MediaStream and MediaStreamTrack with HTMLMediaElement.captureStream().
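Roughly, that workaround looks like the following (a minimal sketch, not the project code; the /output.webm URL and MIME string are placeholders):

const mediaSource = new MediaSource();
const audio = new Audio(URL.createObjectURL(mediaSource));

mediaSource.addEventListener('sourceopen', async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('audio/webm; codecs=opus');
  // Read the file being written by the parec | opusenc | ffmpeg pipeline above.
  const response = await fetch('/output.webm');
  const reader = response.body.getReader();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    sourceBuffer.appendBuffer(value);
    // Wait for each append to complete before appending the next chunk.
    await new Promise((resolve) =>
      sourceBuffer.addEventListener('updateend', resolve, { once: true })
    );
  }
});

audio.play(); // playback is needed for the captured track to produce audio
// captureStream() yields a MediaStream whose audio MediaStreamTrack
// can then be attached to an RTCPeerConnection.
const [audioTrack] = audio.captureStream().getAudioTracks();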

If there were a way to pass the file's ReadableStream directly to senderStreams.writableStream, we could eliminate writing the Opus audio data to a WebM container and the need for MediaSource altogether, and just convert the raw Opus file to a MediaStreamTrack for use with WebRTC technologies.
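A hypothetical sketch of what that could look like (none of this works today: the writable expects encoded-frame objects, which are not constructible from raw bytes; the names follow the early senderStreams shape used above):

// Hypothetical: pipe a raw Opus file straight into the sender's encoded writable.
const senderStreams = sender.createEncodedStreams(); // early explainer shape
const response = await fetch('/output.opus');

await response.body
  .pipeThrough(new TransformStream({
    transform(bytes, controller) {
      // bytes: Uint8Array of encoded Opus data. These would need to be framed
      // into the encoded-frame objects the writable expects, which the current
      // API provides no constructor for; hence the MediaSource workaround above.
      controller.enqueue(bytes);
    }
  }))
  .pipeTo(senderStreams.writableStream);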