Closed — neuronsupport closed this issue 6 years ago
No. Each individual application has its raw content live-encoded on the server into an H.264 frame, which is then sent over its own dedicated WebRTC data channel, configured to use UDP. On reception in the browser, the H.264 frame is decoded by a wasm H.264 decoder and drawn into that application's own dedicated canvas. As such, the entire 'composited' image you see in the browser is assembled in the browser itself using ordinary DOM elements.
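As a rough sketch, the browser-side receive path described above could look like the following. This is an illustration only, assuming injected stand-ins for the project's actual pieces: `decodeH264` represents the wasm decoder and `paint` represents drawing into the per-application canvas; neither name comes from the project's real API.

```typescript
// One application's decoded video frame (stand-in type, not the project's).
type Frame = { width: number; height: number; data: Uint8Array };

// Minimal shape of the RTCDataChannel we rely on here.
interface FrameChannel {
  onmessage: ((ev: { data: ArrayBuffer }) => void) | null;
}

// Wire one application's dedicated data channel to its own canvas:
// each incoming message is one encoded H.264 frame, which is decoded
// and painted immediately. Decoder and painter are injected so this
// sketch stays independent of any concrete wasm module or canvas API.
function attachFrameSink(
  channel: FrameChannel,
  decodeH264: (encoded: Uint8Array) => Frame, // wasm decoder stand-in
  paint: (frame: Frame) => void,              // draws into the app's canvas
): void {
  channel.onmessage = (ev) => {
    const encoded = new Uint8Array(ev.data);
    paint(decodeH264(encoded)); // decode + draw; one canvas per application
  };
}
```

In a real page, `paint` would typically end in `CanvasRenderingContext2D.putImageData` on the canvas element dedicated to that application, and one `attachFrameSink` call would be made per application/channel pair.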
I see, so it works similarly to VNC, except that each application gets its own HTML5 canvas element instead of one grand canvas drawing everything.
Hi, I was wondering how you utilize WebRTC/ORTC. Are you streaming a composited screen image to the browser through WebRTC/ORTC?