karth295 closed this issue 6 years ago.
Is it actually less efficient to send multiple streams? If there are 4 streams, wouldn't a combined stream be 4x the bitrate of each individual stream?
Probably depends on whether you keep the quality of the original video streams (which you don't need to do).
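Some back-of-envelope math on that point (the 1 Mbps per-stream bitrate below is an assumed figure, not a measurement): at original quality a combined stream does carry roughly the sum of the inputs, but if the server downsamples the mosaic so it fits the resolution of a single stream, the total pixel count is unchanged and the combined bitrate can stay near that of one stream.

```python
# Back-of-envelope bandwidth math. Bitrates are assumed figures.
STREAM_KBPS = 1000   # assume each participant uploads ~1 Mbps
N = 4                # participants

# Combined stream at original quality: roughly the sum of the inputs.
combined_full_quality = N * STREAM_KBPS   # the "4x" worry above

# Combined stream downsampled so the 2x2 mosaic fits one stream's
# resolution: total pixels are unchanged, so the bitrate stays close
# to a single stream's.
combined_downsampled = STREAM_KBPS

print(combined_full_quality, combined_downsampled)
```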
Regardless, I think dynamically adjusting the quality of video streams would be rad!
This would be especially useful for embedding a video chat on a public website, where you don't want to stream the high-res version to everyone who opens the page (and definitely not to Google's crawlers), so being able to downsample the video/audio on the server would be great.
I'm kinda uncertain about these reasons, though.
So overall, dynamically adjusting the video streams sounds great, but multiplexing doesn't seem that necessary, but maybe it's more efficient, but we'd probably have to test it... we run the risk of implementing something that isn't actually more efficient.
This is called an MCU (Multipoint Control Unit): https://en.wikipedia.org/wiki/Multipoint_control_unit

MCUs use more CPU: https://bloggeek.me/webrtc-multiparty-video-alternatives/

We probably need one with downsampling in order to scale up to 100s of people: https://webrtcglossary.com/mcu/
For 100s of people we should spread the work across multiple machines. We can have each client upload its stream to one server, and every client download a stream (containing multiple videos) from each server.
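A quick sanity check on that fan-out idea (the participant and server counts below are made-up): with a full mesh, every client uploads and downloads N-1 streams, while with a pool of composing servers each client uploads once and downloads one composite stream per server, independent of how many people join.

```python
# Per-client connection counts: full mesh vs. a pool of composing servers.
# Participant and server counts are hypothetical.

def mesh_connections(n_clients):
    # In a mesh, everyone sends to and receives from everyone else.
    uploads = n_clients - 1
    downloads = n_clients - 1
    return uploads, downloads

def server_pool_connections(n_servers):
    # Each client uploads its stream to exactly one server, and downloads
    # one composite stream (containing multiple videos) from each server.
    uploads = 1
    downloads = n_servers
    return uploads, downloads

print(mesh_connections(100))        # (99, 99): unusable for 100s of people
print(server_pool_connections(4))   # (1, 4): independent of participant count
```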
I wonder if GPUs will be more efficient for all this video processing.
This is more of a long term issue, but it'll be good to keep it in the back of our minds.
Essentially we need a real time version of this: https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos
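The linked wiki page builds the mosaic with ffmpeg's filter graph; a real-time version would pull from live sources instead of files. As a sketch, here is how a 2x2 composite command could be assembled with the `xstack` filter (the input names and output file are placeholders):

```python
# Sketch: assemble an ffmpeg command for a 2x2 mosaic of four inputs,
# following the approach on the linked wiki page. Input/output names are
# placeholders; a real-time MCU would read from live sources instead.
inputs = ["in0.mp4", "in1.mp4", "in2.mp4", "in3.mp4"]

# xstack tiles the four streams into one frame; the layout places each
# input at offsets expressed in terms of the other inputs' dimensions.
filter_graph = "xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0"

cmd = ["ffmpeg"]
for name in inputs:
    cmd += ["-i", name]
cmd += ["-filter_complex", filter_graph, "mosaic.mp4"]

print(" ".join(cmd))
```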
Right now every client downloads every stream separately, which is really inefficient in terms of bandwidth. We should package several (or all) streams together.