lemnis opened 2 years ago
A small round-up of my own personal thoughts:
@lemnis Excellent idea and proof of concept! I've explored this in the past and agree that automating the testing is essential to keeping results fresh. That said, it's not a great fit for the current test format, it's a lot of work to set up (initially), and in my experience, a lot of the screen reader testing packages are not robust enough to accomplish the goal reliably.
I'd like to point you in the direction of ARIA-AT, specifically the automation workstream. I'm a co-chair of that group, and our goal is to approach screen reader testing in a more standard way, with the involvement of AT developers - and automate all of it. It's a project that will take quite a lot of effort and time, but I'm hopeful that it will eventually make a11ysupport.io obsolete. Might you be interested in helping?
I'll keep an eye on ARIA-AT and am happy to help where I can.
FYI, I've also been playing a bit with Playwright, since all browser developer tools contain accessibility info. Of the tools currently available for inspecting the exposed a11y data, it feels the most stable and reliable, and it can be an interesting data point before that data gets sent to the AT. I reported a couple of bugs to improve Playwright: https://github.com/microsoft/playwright/issues/14332 & https://github.com/microsoft/playwright/issues/14347.
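To make the "data point" idea concrete: a minimal sketch of how a Playwright-style accessibility snapshot (shape assumed here to be `{ role, name, children }`, roughly what `page.accessibility.snapshot()` returns) could be flattened into lines that are easy to diff against what a screen reader actually announces. The tree below is a made-up example, not real Playwright output.

```javascript
// Flatten an accessibility-tree node into indented "role: name" lines,
// so the exposed a11y data can be compared against screen reader output.
function flattenA11yTree(node, depth = 0, out = []) {
  if (!node) return out;
  out.push(`${"  ".repeat(depth)}${node.role}: ${node.name ?? ""}`);
  for (const child of node.children ?? []) {
    flattenA11yTree(child, depth + 1, out);
  }
  return out;
}

// Hypothetical snapshot; in practice this would come from Playwright.
const snapshot = {
  role: "WebArea",
  name: "Demo page",
  children: [
    { role: "heading", name: "Hello" },
    { role: "button", name: "Submit" },
  ],
};

console.log(flattenA11yTree(snapshot).join("\n"));
// WebArea: Demo page
//   heading: Hello
//   button: Submit
```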
Seeing that multiple packages are available to automate screen reader testing, wouldn't it be great to start supporting them in combination with manual tests? (e.g. @guidepup/guidepup, @accesslint/VoiceOver)
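One thing combining automated runs with manual tests would need is some tolerance in comparing spoken output, since screen readers vary in casing and punctuation spacing. A rough sketch, with hypothetical helper names (the normalization is mine, not part of any of those packages; in a real run `spoken` would come from the automation package rather than a literal):

```javascript
// Normalize a spoken phrase for comparison: lowercase and collapse whitespace.
function normalizePhrase(phrase) {
  return phrase.toLowerCase().replace(/\s+/g, " ").trim();
}

// Check whether the actually spoken phrase contains the expected phrase,
// after normalizing both sides.
function matchesExpectation(spoken, expected) {
  return normalizePhrase(spoken).includes(normalizePhrase(expected));
}

// Hypothetical captured output vs. a manually authored expectation.
console.log(matchesExpectation("Submit,  button", "submit, button")); // true
```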
Me trying stuff out:
Output
https://user-images.githubusercontent.com/3999815/170102579-e2737535-1cd7-4d91-bdfd-77ad11530cd1.mov
What are your thoughts? How valuable do you think it would be, and what should we take into account?