Quansight-Labs / czi-bokeh-mgmt

MIT License

TASK - Scope and plan accessibility audit - what, how #8

Open trallard opened 1 month ago

trallard commented 1 month ago

Before starting any hands-on audits, we need to align on the following:

🙋🏽 Who: @trallard @frankelavsky @mattpap

trallard commented 1 month ago

As promised, here are my initial thoughts on scoping the audit.

Since Bokeh plots can be used in multiple contexts like dashboards, Jupyter notebooks, and websites, I'd suggest prioritising the standard, reusable components that Bokeh provides by default/natively, regardless of the final context in which the plots are presented.

Including:

At a later stage and depending on findings, we can expand to other items like:

Separately, someone in our design team could look into:

On the how: we can reuse some of the examples in the Bokeh documentation. @frankelavsky, what works best for you here in general? Would having a single place with all the things you need to audit help with findability? If so, we could create a small webpage with only the relevant examples.

Pinging @frankelavsky @mattpap for ideas, thoughts, questions on the above

trallard commented 1 month ago

As for accessibility prior work, I think the most reliable source right now is the issue tracker: https://github.com/bokeh/bokeh/issues?page=2&q=sort:updated-desc+accessibility

Documentation and discussions:

frankelavsky commented 1 month ago

> what works best here for you in general, have one single place with all the things you need to audit for findability?

That would be awesome, but not if it slows us down too much. Building out a page wouldn't just be good for finding what to audit but also for controlling the archive as well as providing more flexibility for experimenting/adjusting if we need to. For example, if we have questions about a particular capability, we could adjust the artifact directly as opposed to finding a representative example that is already made. Again, the advantages might sound great but if it is a lot of lift to build this out, then I'm happy to find examples and simply point to them instead.

trallard commented 1 month ago

@pavithraes we could perhaps reuse some of the examples and official tutorial bits. I don't believe this would be a huge effort, but could you sanity-check this? I think a single page/notebook and, if needed, a simple deployment could be done in a couple of days. Is my assessment correct?

frankelavsky commented 1 month ago

> I'd suggest prioritising standard reusable components regardless of the final context in which the plots are presented

Wanted to write that this is great and your suggested starting places for the audit look good. A few comments/questions:

  1. The plotting interface: Are the marks always rendered on a canvas element? Is there an "svg" mode/regl or equivalent?
  2. Is there a meaningful difference between different chart/plot types that we want to include, or would a scatterplot-only audit be just as useful as a scatterplot + line + small multiples + grouped bars + etc in an audit? (once we get to building stuff out with data navigator or otherwise, the differences will certainly matter but auditing between them might not be)
  3. Are there any custom interactions that wouldn't be what was listed, but could still be somewhat common use-cases, like cross-filtering, selecting/clicking elements, etc?

My instinct with the second point here is that even though we can anticipate wanting to treat the navigation design for something like a scatterplot differently from a group of bars, keeping track of the audit results between them might not be as informative for now. That being said, for the sake of a comparative audit at the end, it might be awesome to show that different data structures are now handled from a navigation perspective where they weren't initially.

frankelavsky commented 1 month ago

Also, I plan to assemble a simple example piece of audit "evidence" on the Plot Tools, just to start the discussion on formatting. It'll be a real test, even though the file+format itself is open to discussion.

Since we will collect a lot of evidence through the process, it's helpful to go over the format and settle on a template we like. Generally, each piece will be organised along the lines of: "Summary/explanation of failure" (with links to the guidelines or standards used in the test), "Video/image proof", "Steps to reproduce", "Expected results", links to the artifact itself that was evaluated, and then any technical details (like which screen reader or browser I was using).
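A rough sketch of what such an evidence template could look like as a markdown file (the section names are placeholders taken from the description above, and everything here is open to discussion):

```markdown
## Summary / explanation of failure
Short description, with links to the relevant guidelines or
standards used in the test (e.g. WCAG success criteria).

## Video / image proof
Screen recording or annotated screenshot.

## Steps to reproduce
1. ...

## Expected results
...

## Artifact
Link to the exact example/page that was evaluated.

## Technical details
Screen reader, browser, OS, and their versions.
```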

trallard commented 1 month ago

I forgot Pavithra was on PTO, so I checked with other folks. We can set up a self-contained set of examples and make them available as a dashboard pretty easily.

> For example, if we have questions about a particular capability, we could adjust the artifact directly as opposed to finding a representative example that is already made.

I like this, and I would like to have this in a separate repo as we did at https://github.com/Quansight-Labs/JupyterLab-user-testing That way, we can not only iterate on the artefact but also keep track of dependencies and the audit outputs there (and eventually transfer the whole repo to the Bokeh organisation in GH). It would also allow other people to build from and reproduce our setup and workflows if needed (reproducibility for the win!). So, I will get this set up next week and add you all.

> Is there a meaningful difference between different chart/plot types that we want to include, or would a scatterplot-only audit be just as useful as a scatterplot + line + small multiples + grouped bars + etc in an audit?

> My instinct with the second point here is that even though we can anticipate we will want to treat the navigation design between something like a scatter or a group of bars differently, keeping track of the audit results between them might not be as informative for now.

For this initial scope and audit, I do not think we need to examine a wide variety of plots; one or at most two would be perfectly fine for now, as I anticipate that we (or mostly you) would encounter the same interactions and barriers across most of the plots.

> That being said, for the sake of a comparative audit at the end, it might be awesome to show that different data structures are now handled from a navigation perspective that weren't initially.

🔥 absolutely!

> Are there any custom interactions that wouldn't be what was listed but could still be somewhat common use cases, like cross-filtering, selecting/clicking elements, etc?

yes, here is where I would put stuff like:

In general, panning, zooming, clicking, and dragging are available by default.

And I almost forgot: there are also some keybindings associated with the Edit tools.
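For reference, the gestures mentioned above map to tools configured on a figure via the standard Bokeh API; a minimal sketch (the particular selection of tools here is illustrative, not Bokeh's default set):

```python
from bokeh.plotting import figure
from bokeh.models import HoverTool

# Panning, zooming, clicking, and dragging correspond to tools such as
# pan, wheel_zoom/box_zoom, tap, and box_select; the list is illustrative.
p = figure(tools="pan,wheel_zoom,box_zoom,tap,box_select,reset")
p.scatter([1, 2, 3], [4, 5, 6])

# Tools can also be added after construction.
p.add_tools(HoverTool())
```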

> Also, I plan to assemble a simple, example piece of audit "evidence" on the Plot Tools just to start the discussion on formatting. It'll be a real test, even though the file+format itself is open to discussion.

This sounds great; looking forward to it.

> The plotting interface: Are the marks always rendered on a canvas element? Is there an "svg" mode/regl or equivalent?

By default, yes. There are, however, SVG and PNG exporters. But I am not 100% sure, so I will need someone to confirm, or I will have to double-check.

mattpap commented 1 month ago

> The plotting interface: Are the marks always rendered on a canvas element? Is there an "svg" mode/regl or equivalent?

There are three output backends: canvas, svg, and webgl (see Plot.output_backend).

> (...), however, svg and png exporters. (...)

SVG and PNG export works by running code in a (preferably headless) web browser and capturing a screenshot of the relevant page region, so it is implementation-independent. However, the save and copy (to clipboard) tools are implementation-dependent and can only capture what is painted onto <canvas> or <svg> layers, not regular DOM elements. This adds a lot of complexity, because if we want to support selectable text via DOM elements, then we still need to be able to paint it to the canvas regardless. For example, in the case of "math text" we use MathJax, which operates on DOM/CSS nodes and generates SVG output, which is then painted onto <canvas>.

Also, for future reference: when I say painting onto <canvas>, that includes rendering <svg>, because the SVG backend uses the HTML5 canvas API (the painting code paths are the same for both backends) and translates those API calls to SVG elements.
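For reference, the backend is selected per plot via the standard `output_backend` property; a minimal sketch with the three values mentioned above:

```python
from bokeh.plotting import figure

# output_backend selects how the plot is painted: "canvas" (the default),
# "svg", or "webgl". As noted above, the painting code paths are shared.
p_canvas = figure(title="canvas (default)")
p_svg = figure(title="svg", output_backend="svg")
p_webgl = figure(title="webgl", output_backend="webgl")
```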

mattpap commented 1 month ago

> Are there any custom interactions that wouldn't be what was listed, but could still be somewhat common use-cases, like cross-filtering, selecting/clicking elements, etc?

Interactivity of plots consists predominantly of tools that act on the canvas, the (cartesian) frame, and certain renderers like axes, or a combination of these. Additionally, certain annotations can be interactive, like clickable legends (e.g. see here) and editable/interactive annotations like BoxAnnotation or Span (for example, used in persistent selections; see here).

Side note: the naming convention is quite inconsistent across Bokeh models, a side effect of historical artifacts and naming clashes (we use a flat namespace for most models, except for user-defined models and some experimental sub-modules). Thus we get e.g. BoxAnnotation and Span, both of which are annotations. Models in certain sub-modules like bokeh.models.tools all have a Tool suffix, but as a counterexample almost all glyphs do not have a Glyph suffix (glyphs with the Glyph suffix are a very recent addition due to naming conflicts).
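A minimal sketch of the clickable-legend interactivity mentioned above, using the standard `click_policy` legend property (the data and labels here are made up):

```python
from bokeh.plotting import figure

p = figure()
# legend_label creates a legend entry per glyph; values are illustrative.
p.line([1, 2, 3], [1, 4, 9], legend_label="squares")
p.line([1, 2, 3], [1, 2, 3], legend_label="identity")

# Make legend entries interactive: clicking an entry toggles the glyph's
# visibility. "mute" is the other interactive policy; "none" disables it.
p.legend.click_policy = "hide"
```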

Tools are a great source of various accessibility issues. Discoverability is bad (e.g. try to notice the arrow when hovering over tool buttons that support context menus; you need to press it to show the associated menu). Keyboard support is very bad, with multiple active tools working at the same time, which produces confusing behaviour at best. Here too, discoverability is bad, or rather non-existent (see the relevant issue https://github.com/bokeh/bokeh/issues/8512). The most hopeless case is the edit tools, which we previously decided need a redesign.

Interacting with glyphs via tools or with annotations directly can be quite tricky, e.g. for small entities, because hit regions are too small and often there are no visual cues if the user is hitting anything relevant with their pointer device.

Touch and mobile device experience is bad, primarily because we hardly ever have the resources to work in that area. I sometimes test and do some work there, but it's not a consistent effort. Thus this can be a great source of all sorts of issues related to accessibility or general usability.

I will keep updating this comment as I recall more relevant problematic areas.

frankelavsky commented 1 month ago

Just commenting to note that I've read the above - really helpful context all around, thanks to both of you @trallard and @mattpap.

A note on canvas: some of the issues will be "cleaner" than ones created by SVG-based rendering (which can suddenly run into strange browser-level support for things like text navigation). I think we can approach just canvas rendering for now, before we decide if a separate SVG-based audit is worthwhile. The eventual solution (if we go with Data Navigator) will likely be agnostic about the rendering choice anyway.

frankelavsky commented 1 month ago

As for exporting formats, that is a whole separate discussion from rendering, I think. Folks I've worked with in the past were passionate about accessible exporting, and basically only SVG has potential for that because it is markup (raster exports are just pixel data, essentially).

But I'm not too keen on including accessible image exporting in the scope of our work (certainly doubt auditing this would be helpful at the moment), but I'm open to considering this down the road once we start strategizing and prioritizing the various approaches we want to take.

trallard commented 1 month ago

+1 on approaching canvas rendering for now

> But I'm not too keen on including accessible image exporting in the scope of our work (certainly doubt auditing this would be helpful at the moment), but I'm open to considering this down the road once we start strategizing and prioritizing the various approaches we want to take.

💯 agree here; I do not think we should include exporting in our scope, at least until we have a better grasp of our current accessibility baseline and the work needed.

frankelavsky commented 1 week ago

Hi all, just a note that Plot Tools has been "finished" as far as collecting evidence is concerned. See: https://github.com/Quansight-Labs/bokeh-a11y-audit/pull/5

As for the next thing to audit - any priority items? I'm thinking of just going for the plotting interface next, but if it would make sense to do the "smaller" pieces first (axis + annotations), then I'm happy to move on to those instead.

As far as axes and annotations are concerned: does the demo contain everything we'd want to include in regards to those two items? cc'ing @pavithraes in particular on that. If not, I can move to plotting next while we think about what, if anything, to add for those.

trallard commented 1 week ago

> As for the next thing to audit - any priority items? I'm thinking of just going for the plotting interface next, but if it would make sense to do the "smaller" pieces first (axis + annotations), then I'm happy to move on to those instead.

This sounds good; let's move on to the interface (which includes axes 😬)

> As far as axis and annotations are concerned: does the demo contain everything we'd want to include in regards to those two items?

Yes, this should all be included there.

frankelavsky commented 1 week ago

Sounds good! 🫡