I'm trying to put together a simple little utility to split an mp4 video into frames, based on the video-decode-display demo. Here's the code: https://github.com/josephrocca/getVideoFrames.js

I've basically just removed the HTML code and packaged it up into a simple library, but I haven't been able to work out how to detect when all frames have been "emitted" - i.e. when the whole decoding process has completed. I'm assuming there's an event listener that I should be adding somewhere, but I'm not sure.

I assumed that I could simply await the response.body.pipeTo as shown in the snippet below, but that doesn't work - the pipe completes before even the first frame is read.
// Configure an MP4Box File for demuxing.
this.#file = MP4Box.createFile();
this.#file.onError = error => setStatus("demux", error);
this.#file.onReady = this.#onReady.bind(this);
this.#file.onSamples = this.#onSamples.bind(this);

// Fetch the file and pipe the data through.
const fileSink = new MP4FileSink(this.#file, setStatus);
fetch(uri).then(async (response) => {
  // highWaterMark should be large enough for smooth streaming, but lower is
  // better for memory usage.
  await response.body.pipeTo(new WritableStream(fileSink, { highWaterMark: 2 }));
  if (this.#onFinish) this.#onFinish();
});
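To see why awaiting pipeTo alone isn't enough, here's a minimal standalone sketch of the symptom (Node 18+ web streams; all names are illustrative, no MP4Box or WebCodecs involved). pipeTo() settles as soon as the source closes and the sink's write()/close() calls have returned; it does not wait for asynchronous work those calls kicked off, which is exactly how a VideoDecoder emits frames.

```javascript
// Sketch: pipeTo() resolves before async work started inside write() finishes.
async function demo() {
  let framesEmitted = 0;

  const source = new ReadableStream({
    start(controller) {
      controller.enqueue(new Uint8Array([0])); // pretend this is mp4 data
      controller.close();
    },
  });

  const sink = new WritableStream({
    write(chunk) {
      // Simulate a decoder emitting a frame asynchronously, after write() returns.
      setTimeout(() => { framesEmitted++; }, 10);
    },
  });

  await source.pipeTo(sink);
  return framesEmitted; // still 0: the pipe is "done" before any frame appears
}

demo().then((n) => console.log("frames emitted when pipe settled:", n));
```

So completion of the decode has to be awaited separately from completion of the pipe, e.g. via decoder.flush().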
So I'm wondering if a pro here could offer any hints? I'd be happy to update the video-decode-display example with a pull request based on the hints if that is a good idea - it seems like something that would be commonly needed.
Solved! In an earlier attempt I was firing my onFinish after the onSamples loop - and that wasn't working, but I just realised that I had to await decoder.flush(). Here's the fix:
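Reconstructing that fix as a sketch (this.#decoder and the surrounding wrapper names are illustrative, from my own library; the essential part is the await decoder.flush()). The pipe settling only means the demuxer has received all input bytes; VideoDecoder.flush() returns a promise that resolves once every pending frame has been emitted.

```js
fetch(uri).then(async (response) => {
  await response.body.pipeTo(new WritableStream(fileSink, { highWaterMark: 2 }));
  // The pipe finishing only means all bytes reached the demuxer. flush()
  // resolves once the VideoDecoder has emitted every pending frame, so it is
  // now safe to signal completion.
  await this.#decoder.flush();
  if (this.#onFinish) this.#onFinish();
});
```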