Closed: guest271314 closed this issue 5 years ago.
It looks like in non-variable-resolution mode, your changes to `addFrame` will cause the header to be re-written to the file for every frame.
@thenickdude Is there still an issue with the code in the PR?
Merged in 87bcb1e6cc61acd3de825f5ae9a9c5ff378224b4
Getting a `Bad EBML datatype undefined` error from `writeEBML`. Fixed by setting `width` and `height` parameters and using OR:
```js
this.addFrame = function(canvas, overrideFrameDuration, width, height) {
    if (!writtenHeader) {
        // Prefer the explicitly passed dimensions, falling back to the
        // source's own width/height when they are available:
        videoWidth = width || canvas.width;
        videoHeight = height || canvas.height;
        writeHeader();
        writtenHeader = true;
    }
```
For the case where `canvas` is a data URL without `width` and `height` properties, pass `width` and `height` as the last two arguments to the function call.
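For example (hypothetical names; the signature is the one proposed above):

```js
// Hypothetical call against the modified signature: the frame source is a
// data URL with no width/height of its own, so the dimensions are passed
// explicitly as the last two arguments.
videoWriter.addFrame(frameDataURL, frameDurationMs, 640, 360);
```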
Instead of buffering frames as strings, call addFrame() immediately with the canvas you already have.
I'm not changing the API like that because setting the sole track's width/height to any specific value for a multiresolution video is meaningless anyway. I'll merge "|| 0" though.
"|| 0" outputs the expected result.
> Instead of buffering frames as strings, call addFrame() immediately with the canvas you already have.
How can a frame duration be derived when the next frame resolution and number of frames having the same resolution as the current frame is unknown?
Do you actually have a real application where you sample frames using setTimeout, such that you don't know how many frames you'll get out of the video? They will be sampled at irregular intervals and the resulting video will be completely garbage.
Have you run the code at the linked plnkr? Is it possible to know the exact number of frames that canvas can draw in a given time span, given N arbitrary source files, where the requirement is to encode the images to reflect the frame rate of the individual source file?
At the linked code, for N images, whether `Promise` and `setTimeout()` are used or not (primarily to decrease file size), the number of images of the same resolution from the same source file is used in conjunction with the span of time the original source file played at the HTML `<video>` element to determine the frame duration; otherwise, the larger images (potentially summing to fewer frames than a smaller series of same-resolution images) will play back at the same rate as the smaller images, resulting in a visually faster playback rate. The output is the expected result.
If you already know the duration of the original source file, and you know your sampling rate (30 FPS), just divide one by the other and that's your duration.
Sampling frames from a video that is currently playing using setTimeout will never give the same result twice, because the interval between timer firing will drift according to how busy the browser is. So this approach is basically never useful.
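Under that fixed-rate assumption the arithmetic is trivial (a sketch; `sourceDurationSeconds` and `frameCount` are hypothetical names):

```js
// The per-frame duration in milliseconds is the clip length divided by the
// number of frames sampled from it:
const frameDurationMs = (sourceDurationSeconds * 1000) / frameCount;

// ...which can then be supplied per frame via the override parameter:
videoWriter.addFrame(canvas, frameDurationMs);
```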
It is not as simple as that, as explained in the previous comment. When your unmodified code is run without setting individual frame durations, given input media fragments from various media sources, since the number of larger-resolution frames can be less than the preceding or following series of frames, the playback will by default be faster for the larger frames and slower for the smaller frames; a minimal example: https://plnkr.co/edit/4JxS4O?p=info. If you know how to adjust your original code to not encode the frames as described above, kindly fork the plnkr with the code to demonstrate.
Compare the output to https://plnkr.co/edit/Inb676?p=preview. That is why a search for a WebM writer written in JavaScript led to your work in the first place: https://plnkr.co/edit/wyoEIj, https://plnkr.co/edit/n1rGNe.
What are you actually trying to achieve, to concatenate a bunch of clips from existing webm videos and preserve their running time?
> Do you actually have a real application where you sample frames using setTimeout, such that you don't know how many frames you'll get out of the video? They will be sampled at irregular intervals and the resulting video will be completely garbage.
`ReadableStream` and, where supported, `WritableStream` are used instead of `setTimeout` and `requestAnimationFrame`. There is no guarantee as to intervals. From own experimentation, VP8 output differs from recording to recording, unlike H264 or AVC1.
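A minimal sketch of that approach, assuming (per the branch name referenced below) an `ImageCapture` over the source video track; this is not the exact code from the branches:

```js
// Frames are pulled as they become available rather than on a timer, so
// pacing follows the source instead of setTimeout/requestAnimationFrame
// scheduling.
const imageCapture = new ImageCapture(videoTrack);
const frameStream = new ReadableStream({
  async pull(controller) {
    const bitmap = await imageCapture.grabFrame();
    controller.enqueue(bitmap);
  }
});
```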
> What are you actually trying to achieve, to concatenate a bunch of clips from existing webm videos and preserve their running time?
Yes. That is what the code in the branches at MediaFragmentRecorder and https://github.com/guest271314/native-messaging-mkvmerge attempts to achieve.
Began using `MediaSource` (master branch), where several issues prevented the expected output at Chromium; one is reproduced at the plnkr, which crashes the browser. It took some time to find at least one verified source of the crash. Using the branch https://github.com/guest271314/MediaFragmentRecorder/tree/imagecapture-audiocontext-readablestream-writablestream, the `MediaRecorder` implementation at Chromium finally encoded variable resolution, though it does not play back at the input resolution in Chromium's implementation of the HTML `<video>` element, yet does at Mozilla Firefox and Nightly. The issues at Chromium `MediaRecorder`, specifically `captureStream()` where resolution changes, are too lengthy to list here. The remaining 9 branches are dedicated to attempting to write code which outputs the same result at Chromium and Firefox. Have only been able to record media fragments at Chromium which output variable resolution at `<video>` using the H264 or AVC1 codec in a Matroska or WebM container, which Mozilla does not currently play back.

Thus, your work has proven that the requirement is possible, and that the issue at Chromium is by design, as much has been disclosed in bugs, after filing several issues.
Not only existing WebM clips. The media fragments can be created dynamically, and a specific media fragment might be audio-only or video-only, where, if using `mkvmerge`, the corresponding track needs to be created in order to append the individual tracks.
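For illustration, a Node.js sketch (hypothetical file names; `mkvmerge`'s `+` operator appends a file to the previous one, and `--webm` constrains the output to WebM):

```js
const { execFile } = require("child_process");

// Append fragment2.webm to fragment1.webm; the append only succeeds when
// both files carry matching track layouts, hence the need to create a
// corresponding track for audio-only or video-only fragments first.
execFile(
  "mkvmerge",
  ["--webm", "-o", "output.webm", "fragment1.webm", "+fragment2.webm"],
  (error, stdout) => {
    if (error) throw error;
    console.log(stdout);
  }
);
```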
`MediaRecorder` has the issue of stopping recording when the initial source (from `captureStream()`) changes; in general, see https://github.com/w3c/mediacapture-record/issues/147. That still will not solve the issue of Chromium not rendering WebM video written by the same browser in the browser's own implementation of the HTML `<video>` element. Am still not sure why variable resolution video recorded at Chromium using VP8 does not play back at the input resolution. The `<video>` element will play back variable resolution frames recorded at Mozilla, which does not have the issue of crashing when recording the variable resolution input tracks.

Now am attempting to incorporate audio into your work, to be able to write the file precisely how it is intended to be encoded and played back, without trying to work around source code authors' intent not to render variable resolution video at Chromium (https://github.com/kbumsik/opus-media-recorder/issues/23). Once completed, a stable Matroska and WebM writer (or recorder) will be able to output the same result at Chromium, Chrome and Mozilla browsers, without needing to work around decisions made by source code authors as to how HTML `<video>` and `MediaRecorder` are implemented.

This will help re specification and potentially uniform implementation (https://github.com/w3c/mediacapture-record/issues/167), though that still leaves the issue of HTML `<video>` at Chromium (https://bugs.chromium.org/p/chromium/issues/detail?id=983777), and whatever parameters might be passed to the encoder to prevent display of variable resolution frames, besides H264 (openh264 or avc1): https://next.plnkr.co/edit/Axkb8s. Therefore, writing the video directly, to avoid chasing a bug that is an implementation decision, led to this repository.
Close, though the resulting video should not be 43.5 seconds.
Are you sure that the source clips play for exactly the duration you're requesting? I think it's likely that the start and end point get shifted to keyframes.
@thenickdude Have run the same media fragments thousands of times at this point. You might get close to 42 seconds on some of the branches. Not 43.
To verify, you can add the total duration of the played media, then subtract that value from 43.5. A rudimentary algorithm for calculating duration is included at the master branch, to confirm that `MediaSource.duration` when `endOfStream()` is called is close to the total duration of the N input videos recorded by `MediaRecorder` from playback at the HTML `<video>` element.
If you simply add the media fragment URIs the sum will not be 43.5.
Using `video.played.end(0) - video.played.start(0)` or `video.currentTime - from`, where `from` is the start of the media fragment URI, and summing the outputs, the duration is `41.231525`.

Using `MediaSource` `"sequence"` mode with the codec set to VP8, `MediaSource.duration` at `endOfStream()` is `41.208`.
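The summation itself is straightforward (a sketch; `from` is the start offset parsed from each media fragment URI):

```js
// Accumulate the actually-played duration of each media fragment; per the
// figures above, the total lands near 41.2 seconds, not the nominal 43.5.
let totalDuration = 0;

function onFragmentEnded(video, from) {
  // Equivalent alternatives:
  //   video.played.end(0) - video.played.start(0)
  //   video.currentTime - from
  totalDuration += video.currentTime - from;
}
```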
The total duration can output close to 42 over the course of dozens or hundreds of runs, meticulously checking each time, though should not reach 43.
You hope to capture about 1200 frames, right? (40 seconds x 30 FPS). A timing error of 1 millisecond per frame could amount to 1.2 seconds of error over the whole video if the rounding is biased, and 1 ms is the precision of `Date.now()`.
Looks like `video.currentTime` does much better as a clock source:
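A minimal sketch of the idea, assuming frames are captured while the source `<video>` plays (`video`, `canvas`, `context`, and `videoWriter` are assumed to exist):

```js
// Derive each frame's duration from the media clock: video.currentTime
// advances with decoded playback, so per-frame durations reflect source
// timing rather than Date.now() timer drift.
let lastTime = video.currentTime;

function captureFrame() {
  const now = video.currentTime;
  context.drawImage(video, 0, 0);
  videoWriter.addFrame(canvas, (now - lastTime) * 1000);
  lastTime = now;
}
```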
That output certainly resembles the expected result.
The total number of frames to be captured depends on the expected file size and quality of the video; e.g., the maximum possible or 23 or 24 FPS.
Will continue to attempt to include the capability to write audio to the file as well using your work. Do you have any suggestion as to where to begin?
Should another PR be filed, or do you intend to include `|| 0`?
Oh, I've merged `|| 0` already in master, I'll push a release for it.
I don't have any plans to work on adding audio, sorry, and I'm not sure where best to begin either (it probably depends on what format you can capture the audio in and what environment you expect to run in, Chrome, arbitrary browser, Electron, etc).
Did you write the code from scratch?
Yep
Are you aware of https://github.com/tseylerd/Webm-writer/blob/master/src/writer.js?
Nope
Fixes https://github.com/thenickdude/webm-writer-js/issues/6
Add support for
plnkr https://plnkr.co/edit/Inb676?p=preview