sitespeedio / browsertime

Measure and Optimize Web Performance
https://www.sitespeed.io/documentation/browsertime/
Apache License 2.0

Add support to visualmetrics to identify key frames matching the given colors #2119

Closed aosmond closed 2 months ago

aosmond commented 3 months ago

This patch updates the visualmetrics.py and visualmetrics-portable.py scripts so that they can identify and record timestamps for frames that match a given color configuration. The configuration consists of an RGB value with a fuzz (anything within ±fuzz matches) and a fraction between 0 and 1 giving the percentage of each channel's histogram that must match.
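The matching described above might be sketched roughly as follows. This is a hypothetical illustration of the fuzz/fraction check, not the patch's actual code; function names and the exact semantics are assumptions:

```python
# Hypothetical sketch of per-channel key color matching. A channel's
# histogram is a 256-entry list of pixel counts; a channel matches when
# at least `fraction` of its pixels fall within target +/- fuzz.

def channel_matches(histogram, target, fuzz, fraction):
    """Return True if enough of this channel's pixels are near `target`."""
    total = sum(histogram)
    if total == 0:
        return False
    lo = max(0, target - fuzz)
    hi = min(255, target + fuzz)
    matched = sum(histogram[lo:hi + 1])
    return matched / total >= fraction

def frame_matches_key_color(histograms, rgb, fuzz, fraction):
    """A frame matches only when all three channels (R, G, B) pass."""
    return all(
        channel_matches(hist, value, fuzz, fraction)
        for hist, value in zip(histograms, rgb)
    )
```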

This is intended to be used in Firefox automation to collect metrics with respect to video playback latency.

gmierz commented 3 months ago

Hi @aosmond, interesting metric! Can you provide an example video and a command we could test this with, along with the expected output?

aosmond commented 3 months ago

@gmierz I added an example zip/description here: https://gist.github.com/aosmond/4daa941cf36670e2ab0f7ecbeb66ed00

This should yield entries in browsertime.json under "visualmetrics" for when the poster, first video frame, second video frame, and last video frame are displayed.
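To sanity-check the output, something like the snippet below could pull those entries back out of browsertime.json. The exact key names under "visualmetrics" are assumptions based on this discussion, not the merged schema:

```python
# Illustrative only: collect key-color timestamps from a browsertime.json
# file. browsertime.json is a list of results, one per tested URL; the
# "KeyColor" naming convention here is an assumption.
import json

def extract_key_color_metrics(path):
    with open(path) as f:
        results = json.load(f)
    metrics = []
    for result in results:
        for vm in result.get("visualmetrics", []):
            # Keep only the key-color-related fields of each entry.
            metrics.append({k: v for k, v in vm.items()
                            if "keycolor" in k.lower()})
    return metrics
```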

aosmond commented 3 months ago

Updated according to your feedback @gmierz. You can see this working in conjunction with Firefox CI in my try pushes, which I am just polishing up now: https://treeherder.mozilla.org/jobs?repo=try&revision=2bbf558778ffbf707ccfed5a49d24ecc7a0e8e59

I also updated the visualmetrics scripts to accept an individual range for each channel, as well as to annotate the video with the key color frame information (if available).

aosmond commented 3 months ago

The Mozilla CI side is tracked in https://bugzilla.mozilla.org/show_bug.cgi?id=1896524

aosmond commented 2 months ago

> I'm wondering could this just be a change in visualmetrics-portable.py and then add a generic way to pass on parameters to that script from Browsertime, and then you could have your magic analysing it in your ci?

I'm happy to make whatever changes are necessary for a merge :). But I'm a bit uncertain what that would look like, since I didn't make many changes outside the Python scripts, just what I thought was necessary to get the information out and annotate the videos. What parts would you like removed?

I already do a post-processing phase on the raw key color data in our CI to normalize things in case it didn't get a first frame but found a second frame, etc.
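For readers curious what that normalization might look like, here is a minimal sketch under stated assumptions: the frame names and the fallback rule are hypothetical, not Mozilla's actual CI code.

```python
# Hedged sketch: fill in a missing first-frame timestamp from the next
# detected frame, so downstream metrics always have a "firstFrame" value
# whenever any video frame was seen. Frame names are assumptions.

def normalize_key_frames(frames):
    """frames: dict of frame-name -> timestamp (ms), entries may be missing."""
    normalized = dict(frames)
    if "firstFrame" not in normalized and "secondFrame" in normalized:
        # The first frame was missed (e.g. dropped during capture); use
        # the second frame's timestamp as an upper bound for playback start.
        normalized["firstFrame"] = normalized["secondFrame"]
    return normalized
```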