Hi.
This is my first time posting a GitHub issue. I'd be glad to receive any feedback, positive or not, about the tone or appropriateness of anything I've written.
I am a beginner developer, mostly self-taught from resources around the web.
I am quite a cross-platform user: I have a Mac and a Windows machine, and I have tried (or am currently trying) other OSes such as ChromeOS Flex and, in the past, Debian/Ubuntu MATE with Orca. iOS is my main and only phone platform.
I am also a college student actively doing math.
As a fully blind user of all these platforms, I have had the opportunity to see the differences in how MathML content is handled, or not handled at all, by various screen readers on various platforms.
Let me first express all my gratitude to everyone involved in any way in the development and maintenance of MathCAT and similar projects; as a student, I couldn't do much without such interfaces and the developers behind them who only want the best for us.
Anyway, VoiceOver on Mac in particular has a lot of accessibility problems and annoyances here and there, and so does ChromeVox, although the latter is much better than VoiceOver for web navigation.
Lately I've been hitting a very irritating bug with math content and VoiceOver. Although it turned out to be much less dramatic than I feared, it only adds to an already long pile of unaddressed bugs in this particular screen reader:
https://www.applevis.com/forum/macos-mac-apps/stem-students-beware-voiceover-broke-navigation-mathml-content
Moreover, although VoiceOver does support MathML content out of the box, its implementation is quite poor: it's common to have to interact five to seven levels deep into an expression before you can move left or right, and it took Apple ages to add support for the Nemeth braille code on Mac. iOS VoiceOver is not as good as its macOS counterpart for MathML tables. As for ChromeVox, it can barely detect numbers in MathML and nothing else.
So I had an idea: why not create a Chrome extension/app that feeds the screen reader the spoken content through ARIA? Speech, at least, should be fairly easy to implement. I initially had no idea whether braille would even be possible on some screen readers/OSes, but on reflection, I would just have to use MathCAT's own copy of liblouis and convert whatever braille code the user chooses into its Unicode braille equivalent, shown in a dedicated braille area very similar to the one in the MathCAT web demo. Especially with ChromeVox, whose exposure to the web is the most frequent, this could solve so many problems and attract many more blind users to the Chromebook market, which has quite a robust screen reader for anything web-related; outside of Windows, and maybe Orca on Linux, the alternatives are weak, and Mac VoiceOver is terrible with the web.
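To make the braille idea concrete, here is a minimal sketch in plain JavaScript of the Unicode conversion step. It assumes the liblouis translation is already available as dot patterns per cell (the function names and input format here are mine, not MathCAT's); Unicode braille patterns start at U+2800, with dot n mapping to bit n−1.

```javascript
// Convert one braille cell, given as an array of raised dot numbers (1-8),
// into the corresponding Unicode braille character.
function cellToUnicode(dots) {
  let bits = 0;
  for (const d of dots) {
    bits |= 1 << (d - 1); // dot n sets bit (n - 1)
  }
  return String.fromCodePoint(0x2800 + bits);
}

// Convert a whole translated expression: an array of cells,
// each cell being an array of dot numbers.
function cellsToUnicode(cells) {
  return cells.map(cellToUnicode).join("");
}
```

The resulting string could then be written into the dedicated braille area, just like the MathCAT web demo does.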
As I said, I am quite a junior developer, but this seems like a realistic project for my current level of knowledge. However, I know nothing about WebAssembly, and I am lost on how to actually get MathCAT's output transferred to the screen reader through the various, more or less aggressive, implementations of ARIA announcements.
What I will probably have to do can be summarized as follows: through a Chrome extension, create an empty HTML region visible only to screen readers at the bottom of the page, with role="application" or a similar attribute to force all screen readers into their equivalent of application/forms mode, and probably add a keyboard shortcut to move there. The extension would gather the MathML/MathJax content under the cursor before moving it to the new location. I thought I would have to implement the keyboard navigation scheme myself, but MathCAT already does that as well. Finally, once the user has finished reading the math element, they can press Escape (or similar) to send the cursor back to its previous location, and so on for every math element embedded in a given page.
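A rough sketch of that flow, in plain JavaScript for a content script. All names here are mine, the MathCAT wiring is only indicated by comments, and the focus bookkeeping is the one testable piece:

```javascript
// Remember where the cursor was before jumping into the math reading
// region, so Escape can send the user back.
class FocusStack {
  constructor() { this.stack = []; }
  push(el) { this.stack.push(el); }
  pop() { return this.stack.pop() ?? null; }
}

// Create the screen-reader-only reading region at the bottom of the page.
function createReadingRegion(doc) {
  const region = doc.createElement("div");
  region.setAttribute("role", "application"); // force application/forms mode
  region.setAttribute("aria-label", "Math reader");
  region.tabIndex = -1; // focusable from script, but not in the Tab order
  // Visually hidden yet still exposed to assistive technology.
  region.style.cssText =
    "position:absolute;width:1px;height:1px;overflow:hidden;clip:rect(0 0 0 0)";
  doc.body.appendChild(region);
  return region;
}

// Jump into the region with the <math> element found under the cursor.
function enterMathReader(region, mathEl, focusStack, doc) {
  focusStack.push(doc.activeElement); // remember where we came from
  // Here the MathML source would be handed to MathCAT (via WASM), and
  // MathCAT's spoken text and navigation would drive region.textContent.
  region.textContent = "";
  region.focus();
}

// On Escape, return the cursor to its previous location.
function exitMathReader(focusStack) {
  const prev = focusStack.pop();
  if (prev && typeof prev.focus === "function") prev.focus();
}
```

This is only a sketch of the mechanism; how aggressively each screen reader honors the focus move and the region's announcements is exactly the open question above.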
If this project is realistic, I would appreciate minimal pointers for understanding the code, so I can at least try to build something that would be very useful to the blind community.
Moreover, depending on how robust Orca's WAI-ARIA implementation is, this could potentially solve two problems at once.
Thanks a lot!
And my deepest gratitude again for all the great work you all have been doing for us for such a long time.