Closed germain-gg closed 5 years ago
@MatthewShotton would you mind pitching in with any insights you have here?
@gsouquet on initial inspection I think this is a great progressive enhancement and it sounds like a good direction to take the codebase.
However, I'd like to spend a bit more time digging into it before I can fully answer your question. Realistically it will be a couple of weeks until I'm able to dedicate enough time to do this properly. Hopefully sooner!
Awesome! @sacharified and I have a few more ideas that we'd like to run by you or other VideoContext maintainers.
Do you think you'd have time for a Skype call, or if you are based in London we could even meet for a chat about it? Let me know 👍
I think I have a better understanding of the architecture and the choices made with the worker after watching Matthew's JSConf talk: https://www.youtube.com/watch?v=GsvAdTyXN8o
Thanks, Germain
A bit of an update on this task,
I have been looking a bit more into the work needed to move the renderer out of the main thread.
Everything would need to happen in a web worker. As you may know, workers do not implement the DOM API, which means that if you want to send an image to a worker you first have to call `createImageBitmap`; the resulting `ImageBitmap` is a transferable that you can send through `postMessage`.
You can also call `createImageBitmap` on a video element, and it will decode the currently presented frame, which then becomes transferable to the web worker to be painted onto a texture.
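To make that hand-off concrete, here is a minimal sketch. The helper name and message shape are illustrative assumptions, not VideoContext code:

```javascript
// Sketch: move a decoded video frame into a worker as a transferable.
// `video` is an HTMLVideoElement and `worker` a Web Worker; the helper
// name and the { type, bitmap } message shape are assumptions.
async function sendCurrentFrame(video, worker) {
  // createImageBitmap resolves with an ImageBitmap of the currently
  // presented frame; ImageBitmap is a Transferable object.
  const bitmap = await createImageBitmap(video);
  // Listing the bitmap in the transfer list moves it (zero-copy)
  // to the worker instead of structured-cloning it.
  worker.postMessage({ type: "frame", bitmap }, [bitmap]);
  return bitmap;
}
```

Once transferred, the bitmap is neutered on the sender's side, so the worker owns the pixel data outright and can upload it to a texture.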
The main issue is that this call has to be initiated from the main thread, since only the main thread can access the video element, and the decode work runs there too. That means we fall back into the same set of problems... If the main thread is busy, the call to `createImageBitmap` will be delayed and our worker will not receive the correct frame to paint...
I haven't seen any work by the W3C towards fixing that specific issue, but I will keep digging.
I can think of a few ways to work around it using shared workers and ffmpeg.js, but that sounds like something that is out of scope for VideoContext.
Not currently worth doing, as video frame decoding happens on the main thread no matter what.
Hi,
Chrome 69 shipped a very interesting API called OffscreenCanvas. Firefox has it too behind a feature flag. The power of this new feature is that you can render a WebGL canvas in a Web Worker.
That would be a good way to progressively enhance `VideoContext`: it provides a great way to decouple the video rendering engine from the UI. There's a great demo on the Chrome Developers website that highlights the benefits of using that technique. I think I will try to open a PR in the near future to add that functionality.
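For reference, the basic hand-off looks roughly like this. The helper and message shape are hypothetical placeholders, not part of VideoContext:

```javascript
// Sketch: hand rendering control of a canvas to a worker.
// transferControlToOffscreen() detaches the canvas's drawing surface
// and returns an OffscreenCanvas, which is itself a Transferable.
function moveRenderingToWorker(canvas, worker) {
  const offscreen = canvas.transferControlToOffscreen();
  worker.postMessage({ type: "init", canvas: offscreen }, [offscreen]);
  return offscreen;
}

// Inside the worker, the OffscreenCanvas hands out a WebGL context
// just like a regular canvas would on the main thread:
//   self.onmessage = (e) => {
//     const gl = e.data.canvas.getContext("webgl");
//     // ... render ...
//   };
```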
However, before I start I'd love to hear your opinion on this: whether you believe this is the right approach for the codebase. And if it's not too much to ask, could you briefly explain to me how the rendering works, and let me know about any quirks you think would be relevant?
I started digging into the source code and I can see that rendering is handled by the `UpdateablesManager`, which calls the `_update` method on every `VideoContext` instance. All of that happens synchronously on the main thread as a result of the `_updateRAFTime` call.
After digging a bit more I can see some code that offloads the `_updateRAFTime` call to a worker if the tab is not "focused". I do not really understand why it has been built like that and what the benefit of doing that is. Also, with the new `OffscreenCanvas` we now have access to `requestAnimationFrame` inside the worker, which means that we can get rid of the "hacky" `setTimeout`.
Thank you
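To sketch what replacing the `setTimeout` fallback could look like: inside a worker that owns an `OffscreenCanvas`, `requestAnimationFrame` is available on the worker's global scope, so the render loop can be driven there directly. This is an illustrative sketch, not VideoContext code; the scheduler is injected so the same loop works with either `requestAnimationFrame` or a timer-based shim:

```javascript
// Sketch: a frame loop whose scheduler is injected, so the same loop
// runs on the worker's requestAnimationFrame when available, or on a
// setTimeout-based shim otherwise. All names here are illustrative.
function startRenderLoop(render, schedule) {
  let running = true;
  function tick(time) {
    if (!running) return;
    render(time);   // e.g. the per-frame update
    schedule(tick); // queue the next frame
  }
  schedule(tick);
  // Return a stop function so the loop can be torn down.
  return () => { running = false; };
}

// Usage inside a worker (assumed environment):
//   const stop = startRenderLoop(drawFrame, requestAnimationFrame);
//   // later: stop();
```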