w3c / aria-at

Assistive Technology ARIA Experience Assessment
https://aria-at.netlify.app

Potential setup script improvements #369

Open jscholes opened 3 years ago

jscholes commented 3 years ago

On a recent ARIA-AT CG call, we were discussing how page state should be reset between tests (see #358). Tied into that discussion were some thoughts about possible improvements to setup scripts, both in terms of how they are written and executed:

robfentress commented 3 years ago

But there is a risk that any steps carried out by a previous script invocation won't be sufficiently undone, such as hiding an element or changing the accessible name of something. Note that we don't currently do the latter in any tests, but we may in the future.

Do you have an example of where this may be the case or is this a more hypothetical concern?

jscholes commented 3 years ago

@robfentress

Do you have an example of where this may be the case

Changes are made to pages by setup scripts in a number of our test plans to date. E.g.:

jscholes commented 3 years ago

Actionable next step for myself: write up an APG issue explaining current problem and suggested route forward. Namely, a global object for interacting with APG components.

jscholes commented 3 years ago

Do we need a "tear down"/"reset to known good state" mechanism?

The consensus from the March 4, 2021 community group meeting seemed to be: yes, we do need a mechanism for resetting page state between commands. But if we try to explicitly create this, it will complicate the test writing process and most likely leave out edge cases anyway.

As such, @mfairchild365 suggested the following approach:

  1. When a test page is initially loaded, provide a button to run the setup script(s). Give it an autofocus attribute.
  2. When that button is activated, execute the setup script(s), and then change its name and function to "Reset" (or something similar; it was discussed whether the label should be more descriptive, e.g. "press to reset the page between commands", or whether that additional context should instead go in the accessible description).
  3. When the reset button is activated, completely reload the test page, returning the button to its default state and purpose (i.e. for running the setup script(s)). The autofocus attribute will ensure that it receives focus.
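The two-state button described above could be sketched roughly as follows. This is a minimal illustration, not an agreed implementation; the function and label names are made up, and a plain object stands in for the DOM button (which, per step 1, would carry the autofocus attribute).

```javascript
// Sketch of the proposed two-state setup/reset button (names are illustrative).
// State 1: run the setup script(s), then repurpose the button as "Reset".
// State 2: trigger a full page reload, returning everything to the default state.
function createSetupResetButton(runSetup, reloadPage) {
  const button = { label: 'Run Test Setup' }; // stand-in for a DOM button
  button.activate = function () {
    if (button.label === 'Run Test Setup') {
      runSetup();             // execute the test's setup script(s)
      button.label = 'Reset'; // repurpose the button for resetting
    } else {
      reloadPage();           // a full reload restores the page and the button;
                              // the autofocus attribute restores focus to it
    }
  };
  return button;
}
```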

With the above in mind, a tester's journey through a test will look like:

  1. Open the test page.
  2. Activate the button to run the setup script(s).
  3. Carry out the test using the first command.
  4. Return to the button, which now says "Reset", and press it.
  5. When the test page reloads, activate the button to execute the setup script(s) again.
  6. Repeat until all commands have been tested.
mcking65 commented 3 years ago

Another thing I love about this approach is that the "shortcut" for the reset button is simply the browser refresh key. So, the process is explicitly defined on the page, but an experienced tester can easily use the shortcut if they prefer.

mzgoddard commented 3 years ago

As I wrote today in https://github.com/w3c/aria-at/pull/450#issuecomment-920347722, I think we need to change the process of how test plans are created to provide a way to reset a test. A technical solution without a process solution would be hard to create and maintain and likely buggy.

I think there are two non-exclusive process solutions.

The first process solution is to use a copy of the reference page in place of each setup script. Instead of a script modifying the page in the browser, the test author makes a copy of the reference page with those modifications and uses it with each relevant individual test. Any scripting still needed, such as calling the focus method on an element, would be done by the reference copy for that test.

The second process solution is to add a small inline script to the head element of the reference page. This script calls a predetermined callback on the parent window. In effect, this script emits an event like the load event; the difference is how the listener is set up. Listeners that the parent adds to the test page do not apply to the reloaded test page, whereas a callback on the parent window can be set once and the child test page window can call it as desired.
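A rough sketch of that second approach, assuming a hypothetical callback name (nothing here is part of any agreed API). The parent assigns the callback once on its own window; the inline script in the reference page's head calls it on every load, including loads caused by a reset, which load listeners attached by the parent would miss:

```javascript
// In the test runner (parent window): set the callback once.
// "notifyTestPageLoaded" is a hypothetical name.
function installLoadCallback(parentWindow, onTestPageLoad) {
  parentWindow.notifyTestPageLoaded = onTestPageLoad;
}

// Inline in the reference page's head: runs on every (re)load and notifies
// the parent, if one exists and has registered the callback.
function announceLoad(childWindow) {
  const parent = childWindow.parent;
  if (parent && typeof parent.notifyTestPageLoaded === 'function') {
    parent.notifyTestPageLoaded(childWindow);
  }
}
```

The key design property is that the registration lives on the parent, so reloading the child page cannot discard it.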

I think in either case a change in process is needed. Knowing how to set up or reset a test's reference page is deeply tied to that specific reference page and test plan.

jscholes commented 3 years ago

The first process solution is to use a copy of the reference page in place of each setup script.

This is a non-starter. It would add a ton of extra work, not only when creating the tests but also when modifying them, because there would be multiple copies of the entire page.

The second process solution is to add a small inline script into the head element of the reference page.

Question: why can't the head section just contain a direct reference to the setup script on the server, plus some code to run it when the button is clicked? Then the example page would be self-contained.

This aspect of the parent window is what concerns me the most, because it creates a dependency on the page invoking the example, and that's why we're struggling to refresh it. Why is the parent window, i.e. the test runner, 100% required by the example page? Is it so we can close the window automatically when someone navigates to another test, as requested in last week's community group meeting?

s3ththompson commented 3 years ago

@jscholes I think we would benefit from a discussion of these approaches before writing anything off wholesale. @mzgoddard has put a lot of thought into the architecture here (as I know you have too), and I think he's aware of, and interested in discussing, the tradeoffs inherent in the tension between self-contained tests (for simplicity's sake), code reuse (to lower the cost of contributing new test plans), and modularity (for the sake of technical flexibility/upgradability).

I'll send an email to set up some time for an audio call where we can discuss the above issues in a bit more depth.