After Gen reported a need for this, a CEO literally told me he needed this. I imagine we would just need to create a public URL for the report. There wouldn't be editable filters on the public URL.
Useful metrics to include within this:

- Most folks just want to know: are we reducing our risk over time?
- It is also useful to be able to compare multiple departments and know that engineering needs more work on accessibility than the arts department (or whatever).
Adding a screenshot of a report @mgifford built with Purple Hats. It includes additional options to export as CSV and PDF.
Just for clarity, the screenshot in the comment above is from CivicActions' version of Purple Hats.
The latest version of the report from Singapore Digital Services still looks like this:
Still useful, but much simpler.
Thanks @mgifford! I can see how the grade helps. Tagging #76 on here, where we talk about some sort of grade or score.
Merging #152 with this issue. @joelhsmith wrote:
> Could the administrator have the ability to share individual reports publicly (no login) with an obfuscated URL? This would allow people to collaborate without needing multiple logins (https://github.com/bbertucc/equalify/issues/86). The institutional requirement of SSO is sidestepped for some non-sensitive use cases until https://github.com/bbertucc/equalify/issues/86 happens.
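A minimal sketch of how that obfuscated-URL sharing could work, assuming a `reports` table with a `public_token` column; the function names and schema are illustrative, not actual Equalify code:

```php
<?php
// Hypothetical sketch: share a report via an unguessable token.
// Table and function names are assumptions, not Equalify's actual API.

/**
 * Create a public share token for a report. A 32-byte random token
 * yields a 64-character URL segment that is infeasible to guess.
 */
function create_public_share_token(PDO $db, int $report_id): string
{
    $token = bin2hex(random_bytes(32));
    $stmt = $db->prepare('UPDATE reports SET public_token = :token WHERE id = :id');
    $stmt->execute(['token' => $token, 'id' => $report_id]);
    return $token;
}

/**
 * Resolve a token back to a report, or null if the link is invalid.
 * The public view would render this report read-only, filters disabled.
 */
function get_report_by_token(PDO $db, string $token): ?array
{
    $stmt = $db->prepare('SELECT * FROM reports WHERE public_token = :token');
    $stmt->execute(['token' => $token]);
    return $stmt->fetch(PDO::FETCH_ASSOC) ?: null;
}

// The share link would then look something like:
// https://example.com/public/report?token=<64 hex characters>
```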
Maybe we have multiple layout options when creating a report?
We probably also want to list the tests passed. @mgifford noted:
> The number of passed tests can tell us something about the complexity of a site. If we know that there are 400 passed tests & only 1 error, we can probably be more confident about their accessibility than, say, a page that has just 200 passed tests and the same error. The number of right answers is actually an indication of the work done.
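One illustrative way to turn that intuition into a number (nothing Equalify currently computes; the function name is hypothetical) is an error rate across all executed checks, so the same single error weighs less on a page that passed more tests:

```php
<?php
// Illustrative only: error rate across all executed checks.
function check_error_rate(int $passed, int $errors): float
{
    $total = $passed + $errors;
    return $total > 0 ? $errors / $total : 0.0;
}

// 1 error out of 401 checks reads better than 1 error out of 201,
// even though both pages have exactly one error.
printf("%.2f%%\n", 100 * check_error_rate(400, 1)); // 0.25%
printf("%.2f%%\n", 100 * check_error_rate(200, 1)); // 0.50%
```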
I know OCR particularly appreciates references to work done.
Acrobat publishes tests passed in their report:
Another important point from @mgifford:
> These tests won't tell you if a document is accessible. They will tell you if a document isn't accessible.
How can we clearly articulate this?
Another +1 for this is to provide a more interesting screenshot for presentations - the "wall of errors" we currently have isn't very... err... visually interesting.
When completing this, we should probably create a `register_report_template()` function so that other integrations could extend templates.
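Here is a rough sketch of what that could look like, assuming a simple global registry and a renderer callback per template; everything beyond the `register_report_template()` name itself is an assumption, not existing Equalify code:

```php
<?php
// Hypothetical template registry so integrations can extend report layouts.
$GLOBALS['equalify_report_templates'] = [];

/**
 * Register a report template.
 *
 * @param string   $slug     Unique template identifier, e.g. 'executive-summary'.
 * @param string   $label    Human-readable name shown in the report builder.
 * @param callable $renderer Callback that receives report data and returns HTML.
 */
function register_report_template(string $slug, string $label, callable $renderer): void
{
    $GLOBALS['equalify_report_templates'][$slug] = [
        'label'    => $label,
        'renderer' => $renderer,
    ];
}

// Example: an integration registers a simple "risk over time" layout.
register_report_template('risk-over-time', 'Risk Over Time', function (array $report_data): string {
    $rows = '';
    foreach ($report_data['periods'] ?? [] as $period) {
        $rows .= sprintf(
            '<tr><td>%s</td><td>%d</td><td>%d</td></tr>',
            htmlspecialchars($period['label']),
            $period['errors'],
            $period['passed']
        );
    }
    return "<table><tr><th>Period</th><th>Errors</th><th>Passed</th></tr>{$rows}</table>";
});
```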
I would also suggest adopting any new templates as an integration before we work them into core code. That way we can see if people are truly interested by the template before it hits core.
The most interesting thing about reporting the "passed" tests is that you can begin to aggregate how complex the code being used to deliver the site is. This is something that can cut two ways.
Frankly, both are important. Knowing that a 100-page site has 100,000 testable elements, vs. another 100-page site that has a million, will matter.
Maybe that complexity is needed. Likely it isn't. The same approach to counting axe bugs could be applied to other web elements.
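As a quick worked example of that aggregation (assumed input shape, not Equalify's real data model), complexity could be compared as testable elements per page:

```php
<?php
// Illustrative: compare two sites' complexity by testable elements per page.
function testable_elements_per_page(int $total_elements, int $page_count): float
{
    return $page_count > 0 ? $total_elements / $page_count : 0.0;
}

// A 100-page site with 100,000 testable elements vs. one with a million:
printf("%.0f elements/page\n", testable_elements_per_page(100000, 100));  // 1000
printf("%.0f elements/page\n", testable_elements_per_page(1000000, 100)); // 10000
```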
Closing this issue in favor of #204, which includes lessons from this ticket and provides a more concise description of the issue.
Gen Herres notes that CEO-ready reports are essential to prove work on accessibility. Charts are key.