Closed mcomella closed 3 years ago
The macrobenchmark library came out in alpha, so I'm trying to get it working: https://github.com/mozilla-mobile/fenix/issues/19576 I've been unsuccessful, so it probably won't be part of this analysis.
Since mozperftest VIEW seems noisy, I filed an issue to land a similar measurement without perftest: https://github.com/mozilla-mobile/perf-frontend-issues/issues/208
I completed my report: https://docs.google.com/document/d/1QKHK50VJisT2k5kTfbNwDb5MvKKTi5H9xAKR9s-Hzu8/edit#
Here are my conclusions (copy-pasted):
./measure_start_up.py … cold_view_nav_start, 25 iterations: a change must be >= ~50ms to be observed in the median (27ms observed range in the median + buffer time).

I'm going to close this as complete.
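A minimal sketch of how a detectability threshold like this can be estimated, using simulated data (the real numbers come from measure_start_up.py output and the linked report; the base/noise values and the buffer-equals-range rule here are assumptions for illustration):

```python
import random
import statistics

def run_median(base_ms=850.0, noise_ms=40.0, iterations=25, rng=None):
    """Hypothetical: simulate one measurement run of 25 cold_view_nav_start
    iterations (ms) and return its median, as measure_start_up.py would."""
    rng = rng or random.Random(0)
    samples = [base_ms + rng.uniform(0, noise_ms) for _ in range(iterations)]
    return statistics.median(samples)

# Repeat the whole measurement several times to see how far the median
# itself drifts between identical runs: that drift is the noise floor.
rng = random.Random(42)
medians = [run_median(rng=rng) for _ in range(10)]
observed_range = max(medians) - min(medians)

# Rule of thumb from the conclusion above: a change must exceed the
# observed range in the median plus some buffer to be distinguishable
# from noise. (Assumption: buffer roughly equal to the range.)
buffer_ms = observed_range
threshold = observed_range + buffer_ms
print(f"median range: {observed_range:.1f} ms; "
      f"detectable change >= ~{threshold:.1f} ms")
```

With the real data (27ms observed range in the median), this style of reasoning yields the ~50ms threshold quoted above.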
I implemented an intentional 100ms regression (`Thread.sleep`) in mozperftest and measured an improvement. This could be genuine (perhaps the sleep wasn't on the critical path), but it seems more likely to be noise: let's investigate how much noise there is in perftest and what scale of change we're able to detect.