nuthinking opened 2 years ago
What about splitting the canvas into multiple parts? Could I pass an image to addFrame
instead of the Canvas element?
Adding a full HD frame on my Mac mini takes ~150 ms, about 5× slower than real-time. canvas.toDataURL alone takes ~62 ms, so I guess it will be hard to perform the splitting logic and pass the data to more web workers in much less than the remaining ~90 ms.
Update: the splitting logic can happen in the web worker, provided that passing the whole frame across doesn't slow things down.
If someone wants to investigate the performance of encoding multiple frames in parallel in WebWorkers that'd be grand. I suspect the encoding work will actually be sent to Chrome's main renderer thread and so it'll be serialised and won't be any faster.
What I had in mind, for simplicity, is to split the canvas into 4 parts. The whole canvas frame would be passed to every worker; each one takes its quadrant, generates the video for that region, and ffmpeg stitches the four outputs together once they're done. It remains to be proven that the stitching won't leave visible seams.
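To make the quadrant idea concrete, a helper like the following (hypothetical; not part of this library's API) could compute the four regions. Each worker would then read its rectangle with getImageData and encode only that region, and ffmpeg's hstack/vstack filters could reassemble the four outputs at the end:

```javascript
// Hypothetical helper: compute the four quadrant rectangles of a W×H frame
// so each worker can encode one region. On odd dimensions the left/top
// quadrants get the extra pixel.
function splitIntoQuadrants(width, height) {
  const halfW = Math.ceil(width / 2);
  const halfH = Math.ceil(height / 2);
  return [
    { x: 0,     y: 0,     w: halfW,         h: halfH },          // top-left
    { x: halfW, y: 0,     w: width - halfW, h: halfH },          // top-right
    { x: 0,     y: halfH, w: halfW,         h: height - halfH }, // bottom-left
    { x: halfW, y: halfH, w: width - halfW, h: height - halfH }, // bottom-right
  ];
}
```

For a 1920×1080 frame this yields four 960×540 regions; a worker would grab its pixels with `ctx.getImageData(x, y, w, h)`. Whether the per-region encodes plus the stitch pass end up faster than one full-frame encode is exactly what would need measuring.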
What about using toBlob instead of toDataURL? From my searching it looks to be much faster, since it avoids the base64 serialization overhead (but it does require the API to be async instead of synchronous).
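For illustration, a small Promise wrapper (the helper name is mine, not the library's) shows the shape of the async API:

```javascript
// Sketch: wrap canvas.toBlob in a Promise. toBlob hands back a binary Blob
// asynchronously instead of building a base64 data URL on the main thread,
// which is where toDataURL's serialization cost comes from.
function canvasToBlob(canvas, type = 'image/png', quality) {
  return new Promise((resolve, reject) => {
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error('toBlob returned null'))),
      type,
      quality
    );
  });
}

// Usage (illustrative):
//   const blob = await canvasToBlob(canvas, 'image/webp', 0.8);
```

The trade-off is just what was said above: addFrame (or whatever consumes the frame) would have to accept a Blob and become async.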
Any updates on performance? Web Workers would be a nice option, mostly to avoid blocking the main loop.
@c4b4d4 check out webm-wasm. Uses web workers
@akre54 I ended up using this method https://webrtc.github.io/samples/src/content/capture/canvas-record as it records a canvas element in real-time seamlessly.
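For anyone landing here, that canvas-record sample boils down to roughly this (a minimal sketch; the fps, mimeType, and the injectable constructor are illustrative, and codec support varies by browser):

```javascript
// Sketch: record a canvas in real time with captureStream + MediaRecorder.
// The recorder constructor is injectable purely so the logic can be tested
// outside a browser; in a page you'd just use the global MediaRecorder.
function recordCanvas(canvas, durationMs, {
  fps = 30,
  mimeType = 'video/webm;codecs=vp9',
  MediaRecorderImpl = globalThis.MediaRecorder,
} = {}) {
  return new Promise((resolve, reject) => {
    const stream = canvas.captureStream(fps);      // live video track of the canvas
    const recorder = new MediaRecorderImpl(stream, { mimeType });
    const chunks = [];
    recorder.ondataavailable = (e) => { if (e.data && e.data.size) chunks.push(e.data); };
    recorder.onstop = () => resolve(new Blob(chunks, { type: mimeType }));
    recorder.onerror = (e) => reject(e.error || e);
    recorder.start();                              // encodes in real time
    setTimeout(() => recorder.stop(), durationMs); // stop after the requested duration
  });
}
```

Note the caveat that follows in this thread: MediaRecorder encodes a live stream in real time (dropping frames rather than slowing down), so it suits real-time capture, not deterministic frame-by-frame rendering.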
Thanks for that webm-wasm. I'll compare it to see what fits better for my needs :-)
Yeah if you need to record in realtime then the MediaRecorder API is the way to go. For frame-by-frame capture I'd check out webm-wasm
webm-wasm:
😱
It doesn't seem to use multi-threading, for instance. Regardless, you shouldn't use an abandoned project in production.
If it works who cares if it's abandoned? Most of these APIs are pretty stable and not everything needs to be in a constant state of churn.
wasm has only recently gained multithreading support, and though I'm sure you'd see some speedups if you added the flag to webm-wasm, that's not the major bottleneck causing slowdowns in this library. Feel free to profile it yourself.
Based on some tests, the recording only uses 1 CPU core. Is there a way to use more cores? Maybe #39 is the solution? Splitting the work across even 2 cores could roughly double the performance. Recording 15 seconds of full HD video on an Apple M1 takes 50 seconds. It would be great to utilize the machine's hardware to the fullest and dramatically reduce the time required. Thanks!