jugglinmike opened 5 years ago
You're right. We can't make assertions regarding timer accuracy, since that varies greatly between engines and devices. For testing purposes, we would force the server to give slow answers and check that the time difference is roughly reflected in the timeline. I believe that is what the test attempts to do somehow, but I couldn't pinpoint how it's achieving it with dummy.txt. Maybe @yoavweiss has better insight?
The test makes assertions in a couple of files, but the flaky one is the simplest since it involves only `Promise` and `setTimeout`:
```js
promise_test(function(test) {
  var durationMsec = 100;
  // There are limits to our accuracy here. Timers may fire up to a
  // millisecond early due to platform-dependent rounding. In addition
  // the performance API introduces some rounding as well to prevent
  // timing attacks.
  var accuracy = 1.5;
  return new Promise(function(resolve) {
    performance.mark('startMark');
    setTimeout(resolve, durationMsec);
  }).then(function() {
    performance.mark('endMark');
    performance.measure('measure', 'startMark', 'endMark');
    var startMark = performance.getEntriesByName('startMark')[0];
    var endMark = performance.getEntriesByName('endMark')[0];
    var measure = performance.getEntriesByType('measure')[0];
    assert_equals(measure.startTime, startMark.startTime);
    assert_approx_equals(endMark.startTime - startMark.startTime,
                         measure.duration, 0.001);
    assert_greater_than(measure.duration, durationMsec - accuracy);
    assert_equals(performance.getEntriesByType('mark').length, 2);
    assert_equals(performance.getEntriesByType('measure').length, 1);
    performance.clearMarks('startMark');
    performance.clearMeasures('measure');
    assert_equals(performance.getEntriesByType('mark').length, 1);
    assert_equals(performance.getEntriesByType('measure').length, 0);
  });
}, 'User Timing');
```
This affected https://github.com/web-platform-tests/wpt/pull/14479.
The test `service-workers/service-worker/performance-timeline.https.html` produces inconsistent results in Firefox. This can be observed locally via the following command:

Relevant output:
The instability is also apparent in the logs reported by TravisCI when validating patches which modify that test (e.g. Travis CI job 431738430 from gh-13164).
The test attempts to account for non-determinism via an arbitrary "accuracy" requirement. As far as I can tell, the tests written specifically for the Performance Timeline avoid asserting any expectations about durations in absolute terms. @plehegar / @foolip, is that right? Would you recommend as much for this test?