mourner opened 3 years ago
I guess this is limited by `minih264` only implementing the barebones baseline profile, with some limitations, while ffmpeg uses the full encoder with the high profile and all the advanced compression heuristics. So there's nothing we can do here other than use another library, which would translate to a much bigger bundle size.
Yeah, I imagine we won't be able to get as low as ffmpeg, but 23 times larger seems a bit much. Have you also tried different values for things like `speed`, `groupOfPictures`, `fragmentation`, etc.?

Maybe worth opening an issue on the `minih264` repo, ideally with a test video demonstrating the file size gap.

Another thing: it might be possible to swap out `minih264` for `x264`. I imagine the bundle size will be much bigger, though (but still way smaller than ffmpeg wasm).
@mattdesl I'll play with the parameters to see if they affect anything. A bigger `x264` bundle sounds awesome: it's used under the hood in ffmpeg and supports all the cool optimizations. The only problem is that it's GPL-licensed, which means this library would have to be too?
`speed` does affect compression significantly: switching from 10 to 0 makes the size 2.25x smaller, while taking 2x longer.
`temporalDenoise` makes encoding 4x longer but brings the size down to 2.7x of the original.
So, summing up: while playing with the parameters helps somewhat, users still need to run exported videos through ffmpeg locally to make them usable for sharing. A bigger `x264` build would allow implementing video export in various visualization apps in a self-contained way, without the need for local re-encoding, although the licensing implications are unclear to me.
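For concreteness, the `speed` trade-off above can be put into a quick sketch. The baseline numbers below are made up for illustration; only the 2.25x (size) and 2x (time) factors come from my measurements:

```javascript
// Illustrative sketch of the measured `speed` trade-off.
// Only the 2.25x / 2x factors are measured; the baseline is hypothetical.
const speed10 = { sizeMB: 11, encodeSeconds: 5 }; // hypothetical reference run

// Switching speed from 10 to 0: 2.25x smaller output, 2x longer encode.
const speed0 = {
  sizeMB: speed10.sizeMB / 2.25,
  encodeSeconds: speed10.encodeSeconds * 2,
};

console.log(speed0);
```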
Another thing you can try in the current build is `kbps`: this switches from constant quantization to a variable bitrate. For example, try 6000 (that's in kilobits per second). The problem is that my C++ hardcodes the min/max quantization, which I've found to be pretty crucial for finer control over the size/quality balance; a lot of this will also depend on the type of content you're encoding and what you plan to use it for. By tweaking this I'm able to go from an ~11 MB one-second mp4 to ~1 MB or so without much quality/speed difference.
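As a sanity check on what a `kbps` target implies, the expected file size is simple arithmetic (this helper is illustrative, not part of the library's API):

```javascript
// Rough expected output size for a variable-rate encode:
// kilobits per second -> megabytes = kbps * seconds / 8 / 1000.
function expectedSizeMB(kbps, durationSeconds) {
  return (kbps * durationSeconds) / 8 / 1000;
}

// A one-second clip at kbps: 6000 should land near 0.75 MB,
// the same ballpark as the ~1 MB result mentioned above.
console.log(expectedSizeMB(6000, 1)); // 0.75
```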
I'm going to look into these settings more, and also expose the qMin/qMax parameters. There are also some additional configurations in `minih264` that aren't exposed yet, and they might help.
The latest version (1.0.7) includes `{ qpMin, qpMax, vbvSize }` settings, plus more detailed documentation in the readme about the file size vs. quality trade-off. Hopefully you can find a mix of settings that works for you without too much of a performance sacrifice.
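A sketch of how those new options might be combined. The `{ qpMin, qpMax, vbvSize }` names come from the 1.0.7 release above, but the surrounding settings shape and all the specific values here are hypothetical:

```javascript
// Hypothetical settings sketch. Only { qpMin, qpMax, vbvSize } are the
// options named in 1.0.7; the other fields and all values are illustrative.
const settings = {
  width: 1920,
  height: 1080,
  fps: 60,
  kbps: 6000,    // variable-rate target, in kilobits per second
  qpMin: 10,     // lower quantization bound: limits how high quality can go
  qpMax: 40,     // upper quantization bound: limits how badly frames degrade
  vbvSize: 6000, // rate-control buffer size; value here is a guess, see readme
};

// Narrowing the qpMin..qpMax range trades file-size flexibility
// for more consistent quality.
console.log(settings);
```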
@mattdesl one last issue I encountered and was puzzled by in both this project and its predecessor: the resulting videos I produced (1920x1080, 60fps) are either an order of magnitude bigger or of abysmal quality (depending on `quantizationParameter`) compared to established encoders such as those used by ffmpeg. I'm wondering if that's an inherent flaw of `minih264` (even though its readme doesn't show as bad of a difference), or is there anything else at play? And if this can't be addressed in the library, could it be documented with some notes on video quality, and perhaps a recommendation to recompress the videos locally afterwards?
Currently I've settled on using `quantizationParameter: 20` (a compromise between encoding performance and good quality: 10 is 1.5x slower, and 30 produces noticeable artifacts), and then simply running `ffmpeg -i video.mp4 video-optimized.mp4`. The default encoding parameters of ffmpeg produce a result that's visually indistinguishable but 23 times smaller.
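For anyone automating that last step from Node, a minimal sketch. The helper name is made up; it only assembles the argument list, and actually spawning ffmpeg (e.g. via `child_process.spawn`) is left out:

```javascript
// Hypothetical helper for the local re-encode step described above.
// With no explicit codec flags, ffmpeg falls back to its defaults
// (libx264 with CRF rate control for .mp4 output, when available),
// which is what produced the 23x smaller file.
function ffmpegArgs(input, output) {
  return ['-i', input, output];
}

console.log(ffmpegArgs('video.mp4', 'video-optimized.mp4'));
```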