thejohnjansen opened this issue 4 years ago
Going back to late 2019, I don't see any regression here: https://wpt.fyi/results/intersection-observer?diff&filter=ADC&run_id=410000005&run_id=623520006
So this has likely always been the case. Are you able to reproduce via wpt run? E.g. on Windows:
python wpt run --log-mach=- --channel=dev edgechromium [test name]
See also https://web-platform-tests.org/running-tests/from-local-system.html#windows-setup for the basic Windows setup.
@thejohnjansen @mustjab do you know if the Edge setup we have is running headless or "headful"? That's a difference one might expect to lead to this.
@foolip great callout! I have been trying to repro and haven't seen any issues, so I will try headless next.
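For reference, assuming wpt run's --headless flag applies to edgechromium (I haven't verified that), the headless repro would be something like:

python wpt run --log-mach=- --channel=dev --headless edgechromium [test name]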
@foolip @mustjab it looks like this might be Linux vs. Windows, not headless vs. headful. When I run the tests manually, they all pass every time. When I run them automated via WebDriver, they fail. Even if I refresh the page automatically, they still fail, but if I refresh the page manually, they pass. I'm going to play around with this now.
I know you didn't intend to, but thank you for reminding me that Chrome runs on Linux and Edge runs on Windows. :-)
I made a patch for this, but it's not pretty: I added a few setTimeout() calls to checkLastEntry() in intersection-observer/resources/intersection-observer-test-utils.js.
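Roughly, the hack looks something like this (an illustrative sketch only, not the actual patch; the real checkLastEntry() in that file has different details):

// Illustrative sketch only, not the actual patch. The idea is to run the
// final assertions after a short delay so the notification-delivery task has
// (hopefully) already happened. This relies on testharness.js being loaded,
// and in a real test the callback would need to be wrapped in t.step_func()
// so the harness catches any assertion failures.
function checkLastEntry(entries, i, expected) {
  setTimeout(() => {
    assert_equals(entries.length, i + 1, 'Expected number of notifications.');
    // ...remaining geometry checks on entries[i] against |expected|...
  }, 100);  // Arbitrary fudge factor, i.e. exactly the kind of hack in question.
}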
Is there a recommended way to slow down a test like this without using a bunch of setTimeout() calls?
Of course, these tests already seem kind of hacky, given this comment in that file:
// Ideally, it should be sufficient to use requestAnimationFrame followed
// by two step_timeouts, with the first step_timeout firing in between the
// requestAnimationFrame handler and the task to deliver notifications.
// However, the precise timing of requestAnimationFrame, the generation of
// a new display frame (when IntersectionObserver notifications are
// generated), and the delivery of these events varies between engines, making
// this tricky to test in a non-flaky way.
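For reference, the waiting pattern that comment describes looks roughly like this (a sketch of the general idea, not necessarily the exact helper in the file):

// Sketch of the rAF-plus-two-step_timeouts pattern described above.
// requestAnimationFrame gets us to just before the next rendering update;
// the first step_timeout is meant to fire between the rAF handler and the
// task that delivers IntersectionObserver notifications; the second fires
// after delivery, so the callback can check the recorded entries.
function waitForNotification(t, callback) {
  requestAnimationFrame(() => {
    t.step_timeout(() => {
      t.step_timeout(callback, 0);
    }, 0);
  });
}

A test would then run its assertions inside something like waitForNotification(t, t.step_func(() => { /* checkLastEntry(...) */ })).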
I'm pretty firmly against adding setTimeout calls for this. We went through a pretty strenuous process to make sure these tests are deterministic and as exacting as possible about the timing of IntersectionObserver notifications. Adding a setTimeout fudge factor just papers over legitimate web compatibility issues, and it also makes the tests inherently slow (which is something we try hard to avoid). I think it's important to understand the root cause of the failures.
I'm not sure exactly where the best place to log this kind of issue is, but since it looks like it might be an infra issue, I'm starting here.
I noticed that in the Intersection Observer test results, Edge appears to be failing a significant number of tests. However, when I run them manually, Edge passes those tests. For example, the table says we pass 2/5 of the display-none tests, but when I run the test file myself, we pass all of them.
I'm not exactly sure how best to debug something like this. I'm happy to do so, but I typically debug valid failures, not incorrect reporting.