owenatgov opened 1 week ago
Name | Link |
---|---|
Latest commit | ab0db31a3617e604dde16357fa751b92d67f9edf |
Latest deploy log | https://app.netlify.com/sites/govuk-design-system-preview/deploys/673ca41063b917000839878f |
Deploy Preview | https://deploy-preview-4290--govuk-design-system-preview.netlify.app |
Here's my thinking on the results:
I ran accessibility testing against this PR and the 3 other PRs, which use different methods to visually and programmatically highlight the pages and sections in metadata:
My general analysis of these findings: none of these 3 methods really makes a difference to how sections and pages are identified in screen reader audio. Quotes do start impacting the readout, notably for JAWS and TalkBack, but not for any other screen readers.
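For context, the kind of variation being compared looks roughly like the sketch below. This is an illustrative HTML sketch only, not the actual markup from this PR or the other three: the `app-search-result__meta` class, the "Section"/"Page" wording and the example content are assumptions, though `govuk-visually-hidden` is GOV.UK Frontend's real utility class.

```html
<!-- Illustrative sketch of search result metadata variants; not the markup from these PRs. -->

<!-- Variant A: plain text, relying on the readout's own wording for context -->
<p class="app-search-result__meta">Components, Button</p>

<!-- Variant B: quotation marks around the section and page names
     (the kind of change found to affect the readout for JAWS and TalkBack) -->
<p class="app-search-result__meta">“Components”, “Button”</p>

<!-- Variant C: visually hidden labels announced only to assistive tech,
     using GOV.UK Frontend's govuk-visually-hidden utility class -->
<p class="app-search-result__meta">
  <span class="govuk-visually-hidden">Section: </span>Components,
  <span class="govuk-visually-hidden">Page: </span>Button
</p>
```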
From off-GitHub chats, we're starting to lean towards not adding any extra indication, as the content of the readout already provides context. On the other hand, this gives us some freedom to experiment, since at least these techniques don't negatively affect screen readers.
Just a draft at the moment. Full description coming soon.