@jgraham Yes, it updates on a timer. So far there have only been two aligned runs in April; we need to see what's going on.
@past We have an action item to tackle the frontend work. It will be done in a few months since it is a lower priority; hopefully that is not a blocker. Others can also look into doing the frontend work if needed.
@jgraham No, not a blocker. The next priority is to produce a list of tests that fail.
@jgraham A nice-to-have would be a generated spreadsheet, something for mobile difference triage; a sketch of what that could look like is below.
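A minimal sketch of such a generated triage spreadsheet, assuming the public wpt.fyi `/api/runs` and `/api/search` endpoints and a chrome vs. chrome_android comparison; this is not existing tooling, and the product names and response fields used here are assumptions based on the wpt.fyi API docs.

```python
# Hypothetical sketch: pull the latest runs for two products from wpt.fyi and
# write the tests whose pass counts differ to a CSV for triage. Endpoint names
# (/api/runs, /api/search) and fields ("id", "legacy_status") follow the
# public wpt.fyi API docs; product names like "chrome_android" are assumptions.
import csv
import requests

WPT_FYI = "https://wpt.fyi"


def latest_run_id(product: str, label: str = "master") -> int:
    """Return the id of the most recent wpt.fyi run for a product/label."""
    resp = requests.get(
        f"{WPT_FYI}/api/runs",
        params={"product": product, "label": label, "max-count": 1},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()[0]["id"]


def write_triage_csv(product_a: str, product_b: str, path: str) -> None:
    """Write a CSV of tests whose pass/total counts differ between two runs."""
    run_ids = [latest_run_id(product_a), latest_run_id(product_b)]
    resp = requests.get(
        f"{WPT_FYI}/api/search",
        params={"run_ids": ",".join(str(i) for i in run_ids)},
        timeout=120,
    )
    resp.raise_for_status()
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["test", product_a, product_b])
        for result in resp.json()["results"]:
            # One status entry per run, assumed to follow run_ids order; an
            # entry may be absent when the test did not run in that browser.
            statuses = result.get("legacy_status") or [None, None]
            cells = [
                f'{s["passes"]}/{s["total"]}' if s else "missing"
                for s in statuses
            ]
            if len(set(cells)) > 1:  # only keep rows that actually differ
                writer.writerow([result["test"], *cells])


if __name__ == "__main__":
    write_triage_csv("chrome", "chrome_android", "mobile-triage.csv")
```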
@past Since we only have two aligned runs, there are probably some inconsistencies we should look into. But Sam's work will probably yield better results, closer to desktop.
@jgraham We had an internal dashboard for browser-specific failures, used to see differences between Firefox mobile and Firefox desktop. Most of them are things like CSS print tests not working on Android. Once we take a look, a lot of this will be easy to explain.
@past Only ~20k subtests are missing between Chrome Android and Chrome iOS, which shows the number of missing tests is relatively small.
@gsnedders We don't have a massive diff in our own CI, and JS tests have much less. The simulator shows more differences than actual macOS or iOS. For example, Web Push did not work in the simulator, so there are things in the simulator that are not representative of iOS.
@past Any estimate of the scale?
@gsnedders Not really. There's no easy way to query that.
@past We could present our current work as a first step. We can say we have fewer results than we would like, but at least there's progress and we can always improve.