jscholes opened this issue 3 years ago
But there is a risk that any steps carried out by a previous script invocation won't be sufficiently undone, such as hiding an element or changing the accessible name of something. Note that we don't currently do the latter in any tests, but we may in the future.
Do you have an example of where this may be the case or is this a more hypothetical concern?
@robfentress
Do you have an example of where this may be the case
Changes are made to pages by setup scripts in a number of our test plans to date. E.g.:
Actionable next step for myself: write up an APG issue explaining current problem and suggested route forward. Namely, a global object for interacting with APG components.
Do we need a "tear down"/"reset to known good state" mechanism?
The consensus from the March 4, 2021 community group meeting seemed to be: yes, we do need a mechanism for resetting page state between commands. But if we try to explicitly create this, it will complicate the test writing process and most likely leave out edge cases anyway.
As such, @mfairchild365 suggested the following approach:
`autofocus` attribute. The `autofocus` attribute will ensure that it receives focus.

With the above in mind, a tester's journey through a test will look like:
Another thing I love about this approach is that the "shortcut" for the reset button is simply to use the browser refresh key. So, the process is explicitly defined on the page, but an experienced tester can easily use the shortcut if they prefer.
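A minimal sketch of the reset control described above, assuming the reset action is just a page reload; the button label and markup details are illustrative, not from the thread:

```javascript
// Hypothetical sketch: build the markup for a reset control a reference
// page could include. The autofocus attribute ensures the button receives
// focus after each load, so activating "reset" and pressing the browser
// refresh key behave identically.
function buildResetButton(label) {
  return `<button autofocus onclick="window.location.reload()">${label}</button>`;
}

const markup = buildResetButton('Reset test page');
```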
As I wrote today in https://github.com/w3c/aria-at/pull/450#issuecomment-920347722, I think we need to change the process of how test plans are created to provide a way to reset a test. A technical solution without a process solution would be hard to create and maintain and likely buggy.
I think there are two non-exclusive process solutions.
The first process solution is to use a copy of the reference page in place of each setup script. Instead of a script modifying the page in the browser, a copy of the reference page with those modifications is made by the test author and used with each relevant individual test. Any scripting still needed, such as calling the `focus` method on an element, would be done by the reference copy for that test.
The second process solution is to add a small inline script into the head element of the reference page. This script calls a predetermined callback on the parent window. In effect, this script emits an event like the load event, but the difference is how the listener is set up. Listeners the parent adds to the test page do not apply to the reloaded test page. A callback on the parent window, by contrast, can be set once, and the child test page window can call it as desired.
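The second solution can be sketched with the parent and child windows stood in for by plain objects; the callback name (`onTestPageLoaded`) and page URL are assumptions for illustration, not from the thread:

```javascript
// Simulated parent window (e.g. the test runner). The callback is set
// once here and survives every reload of the child test page, unlike a
// load listener the parent attaches directly to the child window.
const parentWindow = {
  pagesSeen: [],
  onTestPageLoaded(pageUrl) {
    this.pagesSeen.push(pageUrl);
  },
};

// This mirrors the small inline script in the reference page's head:
// it calls the predetermined callback on the parent window, if present.
function inlineHeadScript(parent, pageUrl) {
  if (parent && typeof parent.onTestPageLoaded === 'function') {
    parent.onTestPageLoaded(pageUrl);
  }
}

// Both the initial load and a reload reach the parent.
inlineHeadScript(parentWindow, 'reference/menu-button.html');
inlineHeadScript(parentWindow, 'reference/menu-button.html');
```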
I think in either case a change in process is needed. Knowing how to set up or reset a test's reference page is deeply tied to that specific reference page and test plan.
The first process solution is to use a copy of the reference page in place of each setup script.
This is a non-starter. It would add a ton of extra work, not only when creating the tests but also when modifying them, because there would be multiple copies of the entire page.
The second process solution is to add a small inline script into the head element of the reference page.
Question: why can't the head section just contain a direct reference to the setup script on the server, plus some code to run it when the button is clicked? Then the example page would be self-contained.
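The self-contained page suggested in that question can be simulated with plain objects; the setup behavior and names (`applySetup`, `resetPage`, `focusedElement`) are hypothetical stand-ins, not from the thread:

```javascript
// Hypothetical sketch: the example page references its own setup script
// and re-runs it when the reset control is activated, with no parent
// window involved.
function applySetup(page) {
  // Stand-in for what setup scripts do today, e.g. moving focus.
  page.focusedElement = 'menuButton';
  page.setupRuns = (page.setupRuns || 0) + 1;
}

function resetPage(page) {
  // Simulate reloading back to the known-good state...
  page.focusedElement = null;
  // ...then re-run the same setup script the head references directly.
  applySetup(page);
}

const page = {};
applySetup(page);  // initial load
resetPage(page);   // tester activates the reset control
```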
This aspect of the parent window is what concerns me the most, because it creates a dependency on the page invoking the example, and that's why we're struggling to refresh it. Why is the parent window, i.e. the test runner, 100% required by the example page? Is it so we can close the window automatically when someone navigates to another test, as requested in last week's community group meeting?
@jscholes I think we would benefit from a discussion of these approaches before writing anything off wholesale. @mzgoddard has put a lot of thought into the architecture here (as I know you have too) and I think he's aware of, and interested in discussing the tradeoffs inherent in the tension between self-contained tests (for simplicity's sake), code-reuse (to lower the cost of contributing new test plans), and modularity (for the sake of technical flexibility / upgradability).
I'll send an email to set up some time for an audio call where we can discuss the above issues in a bit more depth.
On a recent ARIA-AT CG call, we were discussing how page state should be reset between tests (see #358). Tied into that discussion were some thoughts about possible improvements to setup scripts, both in terms of how they are written and executed: