Open lm-n opened 4 years ago
This is really exciting and just let me know what you need from me to support this work!
So stoked for these additions!
There is a lot to respond to here but I'll start by adding thoughts on two points:
Should we support WebGL with textOutput() and gridOutput()?
I think WebGL mode would benefit most from 3D sound or something akin to Three.js's PositionalAudio. In some ways, converting a 3D visual experience to an auditory experience is easier than doing the same with a 2D visual experience. This feature might make for a good future project on p5.sound that could then be used for web accessibility.
Should we explore using image recognition to describe complex sketches?
This seems like a giant task with the potential for enormous payoff. I don't think that any pre-existing general image recognition datasets or models would be very effective, but I wonder whether it is possible to crowd-source p5 sketch descriptions. In theory, a website could be made that generates a p5 sketch on load and has an input field for volunteers to enter a text description of the sketch on the screen. There are obviously a ton of wild design decisions here, like "how do we keep the vocabulary within a certain narrow band." This is several large projects in one, so it is probably on a more distant time horizon, but if we were able to develop a p5 community sketch-to-description dataset, it would be possible to build a really effective (and endearing) natural language description model for p5 sketches.
Both of these things aren't as immediately pressing as questions like "how to indicate when shapes overlap," but I'll leave the hard questions for others.
Thanks @lm-n! I'm adding the Accessibility Stewards @kungfuchicken @cosmicbhejafry to this discussion.
I had an idea on an improvement to textOutput() and gridOutput().
Take this example:
```javascript
function setup() {
  createCanvas(400, 400);
  textOutput();
  background(220);
  square(0, 0, 100);
  translate(300, 0);
  square(0, 0, 100);
}
```
On the canvas, I see two squares: one in the top left, the other in the top right. But the accessible outputs describe a single square in the top left.
When determining which area of the canvas a shape occupies, the current transformations are not taken into consideration, so both squares are interpreted as being in the top left. Because they are the same size and color, they are assumed to be the same shape.
I'd like to adjust how this works to use the current transformations of the rendering context with getTransform() and transform a DOMPoint with the shape's coordinates to get the resulting position on the canvas.
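A minimal sketch of the idea in plain JS (no p5.js API assumed): getTransform() returns the canvas's current affine matrix as a DOMMatrix with entries a–f, and mapping a shape's local coordinates through that matrix gives its true canvas position. Here the matrix is modeled as a plain object so the math is visible:

```javascript
// Map a shape's local (x, y) through a 2D affine matrix {a, b, c, d, e, f},
// using the same mapping the canvas context applies:
//   x' = a*x + c*y + e,  y' = b*x + d*y + f
function applyTransform(m, x, y) {
  return { x: m.a * x + m.c * y + m.e, y: m.b * x + m.d * y + m.f };
}

// translate(300, 0) corresponds to the matrix { a:1, b:0, c:0, d:1, e:300, f:0 },
// so the second square's local (0, 0) maps to canvas position (300, 0).
const afterTranslate = { a: 1, b: 0, c: 0, d: 1, e: 300, f: 0 };
const p = applyTransform(afterTranslate, 0, 0);
// p is { x: 300, y: 0 } — the second square's actual top-left corner
```

In the browser, the same result comes from `new DOMPoint(x, y).matrixTransform(ctx.getTransform())`; the helper above just spells out the arithmetic.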
I think this should be fairly simple to implement, and I wanted to share this here in case anyone has feedback.
Hi,
I'd be really interested in contributing to this work if there is space/appetite. I'm crafting an application for the Processing fellowship and am really interested in p5.js. I was always put off using it due to (imagined?) inaccessibility, so I'm really glad this work is being done and discussed; I chased the thread here from p5.accessibility and the docs.
Does more conversation happen on a Discord, email thread, or any other place?
- Upgrading the tutorial on using p5 with a screen reader
I was wondering if there were any issues capturing possible problems with Mac accessibility that are described briefly in the docs, or if that's been resolved already.
This work has brought up questions about the future of web accessibility that we would like to share with the community [paging @CleezyITP ]:
- What should we do about web accessibility in languages other than English? The functions describe() and describeElement() will support any language, but library-generated descriptions with textOutput() and gridOutput() are, as of now, limited to English. We believe these features should be accessible in the other languages supported by p5.js.
Is the issue that you need other libraries/user-generated input to provide more languages, or am I misunderstanding?
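On the localization question above, one hypothetical approach (none of these names are p5.js API; this is illustrative only) is a per-language string table with placeholder substitution, falling back to English when a translation is missing:

```javascript
// Hypothetical string table keyed by language code. Templates use {name}
// placeholders that get filled from a values object.
const strings = {
  en: { shape: '{color} {shape} at {position}' },
  es: { shape: '{shape} {color} en {position}' },
};

// Build a localized shape description; unknown languages fall back to English.
function describeShape(lang, values) {
  const template = (strings[lang] || strings.en).shape;
  return template.replace(/\{(\w+)\}/g, (_, key) => values[key]);
}

const desc = describeShape('en', { color: 'red', shape: 'square', position: 'top left' });
// desc is 'red square at top left'
```

Word order differs per language (note the Spanish template reverses color and shape), which is why templates rather than concatenation would be needed.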
- What should happen with the sound output of the p5.accessibility add-on?
I'm really interested in this aspect; was user testing ever conducted on this part of the library?
- How can we improve the descriptions generated with textOutput() and gridOutput()?
- How do we indicate the position of shapes using other shapes as points of reference? (e.g.: a red ellipse to the right of the pink triangle)
- How should we indicate when shapes cover other shapes?
- Should we support WebGL with textOutput() and gridOutput()?
- How should we expand library generated descriptions?
I think I need to play with the accessibility features of the library more to appreciate these limitations, but I'd love to discuss the scenarios that come up for heavy users of p5.js.
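For the relative-position and overlap questions above, a naive starting point could compare bounding boxes. A sketch under that assumption (helper names are illustrative, not p5.js API):

```javascript
// Describe where shape b sits relative to shape a, comparing the centers
// of their axis-aligned bounding boxes {x, y, w, h}.
function relativePosition(a, b) {
  const ax = a.x + a.w / 2, ay = a.y + a.h / 2;
  const bx = b.x + b.w / 2, by = b.y + b.h / 2;
  const horiz = bx > ax ? 'right of' : bx < ax ? 'left of' : '';
  const vert = by > ay ? 'below' : by < ay ? 'above' : '';
  return [vert, horiz].filter(Boolean).join(' and ') || 'overlapping';
}

// Simple axis-aligned overlap test, for "shape covers shape" wording.
function overlaps(a, b) {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

const square1 = { x: 0, y: 0, w: 100, h: 100 };
const square2 = { x: 300, y: 0, w: 100, h: 100 };
// relativePosition(square1, square2) is 'right of'; overlaps(...) is false
```

Bounding boxes obviously miss rotated or concave shapes, so this is only a first approximation of what the library-generated descriptions would need.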
- Should we explore using image recognition to describe complex sketches?
Would be really interested in trying to get a prototype of this working if there is appetite.
Excited babbling aside, would love to chat more about these issues and get more involved here.
I want to work on this issue. How can I support? Please point me in the right direction and I will do my best.
Hi there!
@kjhollen and I have been working on web accessibility for p5.js during the summer. We want to open up the conversation about what should happen next and where our community should direct its energy.
Our current work includes: [@CleezyITP & @lmccart let's find some time to go over these and some of the questions below. I'll be emailing you soon!]
#4654: describe() and describeElement(), functions that allow people to create user-generated descriptions of the canvas and the elements (shapes or groups of shapes) on it. This work will be followed by:
#4703: adding 2/3 features of p5.accessibility into p5.js by creating the functions textOutput() and gridOutput(), which create library-generated canvas descriptions for basic shapes. At first the plan was to update the add-on and prepare it for merging into p5.js in the near future. However, we realized it was better/more time-effective to recreate the functionality of the text output and grid output (formerly called table output) in p5.js than to upgrade the add-on, which relies on "monkey patching," parsing, and interpreting the code. This work will be followed by:
This work has brought up questions about the future of web accessibility that we would like to share with the community [paging @CleezyITP ]:
Other ideas we have talked about include:
These are open questions that we would like to raise as we think about the future of the project! Feel free to share your thoughts and join the conversation!