klemay opened this issue 4 years ago
From one of our partners:
> I was digging around and found the potential to create a user script in the browser (via Tampermonkey or Greasemonkey), and found I can add an ARIA application region around items so that all keys are automatically passed through to the application. I wonder if putting an ARIA application region around your Hypothesis iframe might give us screen reader users more control over selecting the text we want to annotate.
>
> ...might be worth a try!
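For anyone who wants to experiment with this, here is a minimal userscript-style sketch of the suggestion. The iframe selector and the label are assumptions, not the client's documented markup; inspect the page to find the real elements:

```ts
// Sketch only: locate the Hypothesis sidebar iframe. The selector is an
// assumption - the client's actual markup may differ.
const frame = document.querySelector('iframe[title="Hypothesis"]');
const region = frame?.parentElement;
if (region) {
  // role="application" asks the screen reader to drop out of browse mode
  // inside this region and pass keystrokes straight through to the page.
  region.setAttribute('role', 'application');
  region.setAttribute('aria-label', 'Hypothesis annotation client');
}
```

Setting the role on an existing ancestor, rather than wrapping the iframe in a new element, avoids a subtle pitfall: moving an iframe in the DOM reloads its content.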
Notes from a call with our friends at Benetech and an accessibility developer they introduced us to:
One suggestion from the call was to apply `contenteditable` around the text on the page. From there the user can create text selections that we would have access to and that would be read aloud to the user. (Note that we'd need to adjust the `h`, `a`, and `s` keystrokes to have a modifier, so they'd look something like `Ctrl-Shift-h`, because simply pressing the `h` key would start typing.) See Slack for Rob's notes from this call. A sketch of the idea follows below.
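A minimal sketch of that idea, assuming the annotatable text lives in a container we can select (the `main` selector and the highlight hook below are assumptions):

```ts
const content = document.querySelector('main'); // assumed container
if (content instanceof HTMLElement) {
  // Making the container editable moves the screen reader's cursor into
  // the real DOM, so selections it creates are visible to the page and
  // are spoken as they grow.
  content.contentEditable = 'true';
  // Block actual edits - we only want caret movement and selection.
  content.addEventListener('beforeinput', (e) => e.preventDefault());
  // Plain `h`/`a`/`s` would now type into the page, hence the modifier:
  // e.g. Ctrl-Shift-h in place of the client's plain `h` shortcut.
  content.addEventListener('keydown', (e) => {
    if (e.ctrlKey && e.shiftKey && e.key.toLowerCase() === 'h') {
      e.preventDefault();
      // ...trigger the "highlight" action here (hypothetical hook).
    }
  });
}
```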
Dan and Katelyn met with the founder and general manager of NV Access; this meeting suggests the accessibility developer we spoke to was pessimistic about NV Access's willingness to implement changes on their end. Notes from the call are in Slack.
A couple of Slack threads with some recent updates on this:
This thread has been really valuable to my team as we work on making annotating more accessible in Manifold Scholarship. Thanks to everyone involved for your efforts!
@robertknight do you have any more details you can share re: item 1 in your most recent comment, for those of us who aren't in the Hypothes.is Slack? It's exciting to hear that NVDA is working on matching DOM and virtual buffer selection!
I'm not sure of the exact status of work in NVDA, but here are some relevant issues:
The last update on the NVDA PR, from Feb 13th 2023, says:
> Blocked by further work on the implementation by Chrome/Firefox
I don't know exactly what that work is.
Thanks, @robertknight! Really appreciate the update and these links.
Overview
When using VoiceOver for Mac to annotate, VoiceOver will read out the text that is being selected, whether that's character-by-character or word-by-word. You can see this in action here:
https://www.youtube.com/watch?v=AOyVt1w_MUU
I have worked extensively with two users who are blind and experienced with NVDA and JAWS, and they have worked with each other to try to replicate this workflow in NVDA and JAWS, with no success. The short version: text selection with NVDA and JAWS happens in an invisible text layer (the virtual buffer) that the Hypothesis client doesn't see, so when text is selected there, the annotation adder doesn't appear. When NVDA or JAWS users interact directly with the page instead, they can create text selections, but there is no audio feedback as they select. A sketch of what the client can and can't observe is below.
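To make the failure mode concrete, here is roughly what a page-side client can observe. This is a sketch, not Hypothesis's actual code, and `showAdderAt` is a hypothetical callback:

```ts
// The page can only react to selections that exist in the DOM.
// Browse-mode selections made in NVDA/JAWS's virtual buffer never reach
// the DOM, so this listener never fires for them - and the adder never
// appears. VoiceOver selects in the real DOM, which is why it works.
document.addEventListener('selectionchange', () => {
  const selection = document.getSelection();
  if (selection && !selection.isCollapsed && selection.rangeCount > 0) {
    // A real DOM selection exists: position the annotation adder near it.
    showAdderAt(selection.getRangeAt(0).getBoundingClientRect());
  }
});

// Hypothetical placeholder for whatever shows the adder UI.
function showAdderAt(rect: DOMRect): void {
  console.log('selection at', rect.x, rect.y);
}
```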
Research and troubleshooting
Here is a summary of our findings thus far:
Helpful documentation
Questions for developers
Additional information
The two users I have been working with have said they'd be willing to meet with a developer for a screen share of the current experience, and/or to test out solutions that we may come up with. I can put developers in touch with these two (very generous!) individuals.