w3c / aria-at

Assistive Technology ARIA Experience Assessment
https://aria-at.netlify.app

Aria-AT Use Cases #102

Open isaacdurazo opened 4 years ago

isaacdurazo commented 4 years ago

This is the first draft of 3 different use cases for the Aria-AT Test Runner that I was able to identify while conducting a series of conversations with @spectranaut, @mfairchild365, and @mcking65. These include basic user flows as well as alternative flows for some scenarios. I would also like to include use cases for the Report itself in the near future.

My intention is to post this in the form of a Wiki page as soon as I get write access to the repo (I've requested this and I'm waiting for the invite). In the meantime, I would love to get feedback and discuss what folks think about this document.

Aria-AT Use Cases

A Use Case is defined as a written description of a series of tasks performed by a user. It begins with the user’s goal and ends with said goal being met. It outlines, from the user’s perspective, how the system should respond or react to certain requests and interactions.

The benefit of use cases is that they identify how an application should behave and what could go wrong. They also provide a better understanding of the users’ goals, helping to define the complexity of the application and enabling better identification of requirements.

This document is the result of a series of conversations conducted with ARIA-AT contributors and stakeholders. It will serve as the foundation for defining the requirements and user interface for the project.

Aria-AT Test Runner use cases

Use Case 1

| Use Case 1 | Admin adds tests to Test Runner |
|---|---|
| Actor | Admin (Aria-AT member) |
| Use Case Overview | After tests have been designed and reviewed, the Admin prioritizes them and adds them to the system for testers to execute. After that, the Admin reviews them and later publishes them. |
| Trigger | Contributors have designed new tests that have been reviewed and are ready to be executed. |
| Precondition | Test contributions went through a review process. |
| Use Case 1 - Basic flow | Add tests to Test Runner |
| Description | This is the main success scenario. It describes the situation where only adding and assigning tests to be executed is required. |
| 1 | Admin prioritizes which patterns need to be tested and in what order. |
| 2 | Admin adds tests to the system. |
| 3 | Admin submits/assigns tests to testers. |
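
To make the flow concrete, here is a minimal sketch of how these steps could map to data in the system. All names are hypothetical, not an actual ARIA-AT schema:

```typescript
// Hypothetical statuses a test could move through in Use Case 1;
// these names are illustrative, not the project's actual schema.
type TestStatus = 'reviewed' | 'prioritized' | 'added' | 'assigned';

interface QueuedTest {
  id: number;
  pattern: string;          // e.g. "checkbox"
  priority: number;         // lower number = execute earlier
  status: TestStatus;
  assignedTesters: string[];
}

// Steps 1-3 of the basic flow collapsed into one admin action.
function addTestToRunner(
  test: QueuedTest,
  priority: number,
  testers: string[]
): QueuedTest {
  return { ...test, priority, status: 'assigned', assignedTesters: testers };
}
```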

Use Case 2

| Use Case 2 | Tester executes test |
|---|---|
| Actor | Tester (Accessibility QA contractors, Aria-AT community members) |
| Use Case Overview | Once tests have been prioritized and added to the system, at least two testers will execute each of them and submit them for review. |
| Trigger | The Admin has added tests to the pipeline that need to be executed. |
| Precondition 1 | Tests have been prioritized. |
| Precondition 2 | Tests have been added to the pipeline. |
| Use Case 2 - Basic flow | Execute test |
| Description | This is the main success scenario. It describes the situation where only executing and submitting a test are required. |
| 1 | Tester provides information about which browser and screen reader combination they will be using. |
| 2 | Tester gets a set of tests according to the browser and screen reader combination they will be using. |
| 3 | Tester opens the test. |
| 4 | Tester reads the instructions. |
| 5 | Tester follows the steps to execute the test. |
| 6 | Tester submits the test for review. |
| Use Case 2 - Alternative Flow 4A | Instructions are not clear to tester |
| Description | This scenario describes the situation where the tester doesn’t understand the instructions. |
| 4A1 | Tester reads the instructions. |
| 4A2 | Tester is confused about the instructions. |
| 4A3 | Tester submits a question to the Admin regarding the instructions of the test that needs to be executed. |
| 4A4 | Tester receives an answer from the Admin. |
| Use Case 2 - Alternative Flow 5A | Tester doesn’t have enough time to finish |
| Description | This scenario describes the situation where the tester, for whatever reason, doesn’t have enough time to finish executing a test. |
| 5A1 | Tester follows the steps to execute the test. |
| 5A2 | Tester needs to pause for whatever reason. |
| 5A3 | Tester saves their work in progress. |
| Use Case 2 - Alternative Flow 5B | Tester returns to the application to finish the execution of a test |
| Description | This scenario describes the situation where the tester has returned to the application to finish the execution of a test that is in progress. |
| 5B1 | Tester opens a test that has been partially executed. |
| 5B2 | Tester continues following the steps to execute the test. |
| 5B3 | Tester submits the test for review. |
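
As a sketch of how alternative flows 5A and 5B could be supported, assuming results are saved per instruction step (all shapes hypothetical):

```typescript
// Hypothetical partial-result record supporting "pause and resume" (5A/5B).
interface TestExecution {
  testId: number;
  completedSteps: number;   // how many instruction steps have been recorded
  totalSteps: number;
  submitted: boolean;
}

// 5A3: persist work in progress; the storage mechanism is an assumption.
function saveProgress(
  execution: TestExecution,
  store: Map<number, TestExecution>
): void {
  store.set(execution.testId, execution);
}

// 5B1: resume from the first step that has no recorded result.
function resume(store: Map<number, TestExecution>, testId: number): number {
  const saved = store.get(testId);
  return saved ? saved.completedSteps : 0; // index of the next step to execute
}
```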

Use Case 3

| Use Case 3 | Admin publishes test results |
|---|---|
| Actor | Admin (Aria-AT member) |
| Use Case Overview | Once at least two testers have executed a given test, its results go into a draft mode where the Admin reviews them and later publishes them. |
| Trigger | At least two testers have executed a test and its results are ready to be reviewed. |
| Precondition 1 | At least two testers have executed the test. |
| Precondition 2 | The test results are in draft mode. |
| Use Case 3 - Basic flow | Publish test results |
| Description | This is the main success scenario. It describes the situation where only minimal review and publishing the results of a test are required. |
| 1 | Admin reviews and compares the results of a test that was executed. |
| 2 | Admin chooses the correct results. |
| 3 | Admin publishes the results. |
| Use Case 3 - Alternative flow 1A | Test results are wrong |
| Description | This scenario describes the situation where the results of the execution of a test are incorrect and the test needs to be executed again. |
| 1A1 | Admin reviews the results of a test that was executed. |
| 1A2 | Admin finds out that the results are incorrect. |
| 1A3 | Admin removes test results from draft mode. |
| 1A4 | Admin adds the test to the pipeline to be executed again. |
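
A minimal sketch of the review states this use case implies (hypothetical names): results sit in draft mode until the Admin either publishes them or sends the test back to the pipeline:

```typescript
type ResultStatus = 'draft' | 'published' | 're-queued';

interface TestResult {
  testId: number;
  status: ResultStatus;
  resultsByTester: Map<string, string[]>; // results from each tester
}

// Basic flow steps 1-3: review, choose, publish.
function publish(result: TestResult): TestResult {
  if (result.resultsByTester.size < 2) {
    throw new Error('Precondition: at least two testers must have executed the test');
  }
  return { ...result, status: 'published' };
}

// Alternative flow 1A3-1A4: pull from draft and re-queue for execution.
function requeue(result: TestResult): TestResult {
  return { ...result, status: 're-queued', resultsByTester: new Map() };
}
```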
mcking65 commented 4 years ago

General feedback:

The use cases would be easier to specify if we had more precise language. So, I think we need to define some terms:

- Test plan: a designed and reviewed set of tests for a given pattern example (e.g., a checkbox example).
- Test run: an execution of a test plan for a single AT/Browser combination.
- Test run suite: the set of test runs of a test plan, one for each AT/Browser combination.

Use case 1 feedback

> After tests have been designed and reviewed, the Admin prioritizes them and adds them to the system for testers to execute. After that, the Admin reviews them and later publishes them.

Using the above terms, I would rewrite this as:

> After test plans have been designed and reviewed, the Admin creates a test run suite for each test plan. A suite consists of a run of the test plan for each AT/Browser combination. The admin also prioritizes test run suites. After testers execute the runs, the Admin manages the review and publication process for each test run report.

What do we mean by prioritize? I think this means sequence rather than having buckets of priorities, e.g., hi/med/low.

A test run would be executing a test plan for a single AT/Browser combination. Thus with our current scope, each test plan will have 6 test runs, making up a test suite. The system could automatically sequence runs within a suite. For instance, when we configure a test round with a specific list of browser and assistive technology versions, we could specify their priority sequence.

So, the admin would prioritize or sequence suites, not individual runs. That is, the admin would say I want all the tests for this checkbox example done, then for this combobox, then for a different combobox, and so on. Put more simply, it is like prioritizing or sequencing test plans.
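
One way to read these definitions as a data model (a sketch under the assumptions above; all names are illustrative):

```typescript
interface AtBrowserCombo { at: string; browser: string }  // e.g. JAWS + Chrome

interface TestPlan { name: string; tests: string[] }      // e.g. "checkbox"

// A run is one test plan executed with one AT/Browser combination.
interface TestRun { plan: TestPlan; combo: AtBrowserCombo }

// A suite is the set of runs of one plan, one per in-scope combination;
// the admin sequences suites, and the system can sequence runs within one.
interface TestRunSuite {
  plan: TestPlan;
  runs: TestRun[];      // currently 6, one per in-scope combination
  sequence: number;     // position in the overall order chosen by the admin
}
```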

> Admin submits/assigns tests to testers.

By default, tests should all be "up for grabs". So, assigning to a tester should be an optional step. I can imagine scenarios where we want to assign a particular test to a particular person so that another person does not grab it. So, the ability to assign is important.

Since we need 2 people to grab each test, a test should be up for grabs until it has two assignees.

I can imagine that an individual tester may only run tests for a specific set of browser/AT combinations. We may want something in a tester's profile to say which ones they can run. Then, the admin cannot assign the wrong person by mistake. Or, we could even auto-assign people based on that field in their profile and their current backlog, and perhaps an availability field in their profile.

Initially, if we don't have tester profiles that specify such things, we may need to do one of the following:

  1. Rely on testers self-assigning.
  2. Rely on admins knowing the skills and availability of each tester very well.
  3. Make assignments in some kind of planning meeting.
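
Here is a sketch of the assignment rules described above; the profile fields and helper names are assumptions, not existing code:

```typescript
interface AtBrowserCombo { at: string; browser: string } // as in the sketch above

interface TesterProfile {
  username: string;
  combos: AtBrowserCombo[]; // combinations this tester can run
  available: boolean;       // hypothetical availability flag
}

interface AssignableRun { combo: AtBrowserCombo; assignees: string[] }

// A run stays "up for grabs" until it has two assignees.
const isUpForGrabs = (run: AssignableRun): boolean => run.assignees.length < 2;

// Guarded assignment: the admin cannot assign the wrong person by mistake.
function assign(run: AssignableRun, tester: TesterProfile): void {
  const capable = tester.combos.some(
    c => c.at === run.combo.at && c.browser === run.combo.browser
  );
  if (!capable) throw new Error(`${tester.username} cannot run this combination`);
  if (!isUpForGrabs(run)) throw new Error('Run already has two assignees');
  run.assignees.push(tester.username);
}
```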

Use case 2 feedback

> Tester executes test

We need to be specific that this is for a single AT/Browser combo. Maybe this should be "tester executes test run"? Or do we just mean an individual test in a test run? I think we mean only test runs.

I think testers should only see runs in their queue, not individual tests. And, the run should have a status showing x of y tests complete. Opening the run could automatically show the page for the first incomplete test in the run.
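
For example, the "x of y tests complete" status and the jump-to-first-incomplete behavior could work like this (hypothetical shapes):

```typescript
interface TestState { name: string; complete: boolean }
interface RunQueueEntry { title: string; tests: TestState[] }

// Status line such as "3 of 16 tests complete".
function progress(run: RunQueueEntry): string {
  const done = run.tests.filter(t => t.complete).length;
  return `${done} of ${run.tests.length} tests complete`;
}

// Opening a run lands on the first incomplete test, if any.
function firstIncomplete(run: RunQueueEntry): TestState | undefined {
  return run.tests.find(t => !t.complete);
}
```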

> Precondition 1: Tests have been prioritized.

I think this should be test runs have been prioritized. Within a test run, the test plan specifies the sequence of tests.

> Precondition 2: Tests have been added to the pipeline.

I think this should be test runs have been added to the pipeline. Seems like these preconditions should be reversed -- 1 should be 2 and 2 should be 1.

> Use Case 2 - Basic flow: Execute test

I think we mean test run here.

> This is the main success scenario. It describes the situation where only executing and submitting a test are required.

I would rewrite as:

> This is the main success scenario. It describes the situation where only executing and submitting the tests in a test run is required.

Then, the first step needs to be that the tester chooses a test run from their queue. If the tester is not assigned a test run, the tester would grab the first test run that is up for grabs and matches the AT/Browser combination that the tester is prepared to use. The tester's queue could automatically be filtered based on the browser in use.
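
A sketch of that first step; how the browser in use is detected and the queue shape are both assumptions:

```typescript
interface QueuedRun {
  title: string;            // e.g. "Checkbox tests in Chrome with JAWS"
  at: string;
  browser: string;
  assignees: string[];
}

// Grab the first under-assigned run matching what the tester is prepared
// to use; the queue is assumed to already be in priority order.
function grabNextRun(
  queue: QueuedRun[],
  at: string,
  browser: string,
  tester: string
): QueuedRun | undefined {
  const run = queue.find(
    r => r.at === at && r.browser === browser && r.assignees.length < 2
  );
  run?.assignees.push(tester);
  return run;
}
```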

> 1: Tester provides information about what browser and screen reader combination they will be using.

Note that we need to word use cases based on the long-term scope, so use the term "assistive technology" instead of "screen reader" where appropriate.

That said, we don't need this step. The tester will choose a test run, e.g., "Checkbox tests in Chrome with JAWS", which specifies both the browser and AT.

As I described above, given Glen's feedback, at a given point in the project timeline we may only want the checkbox tests performed with a specific version of JAWS, e.g., JAWS 2020.1912.11. So, the test admin may set up the run so that it specifies both the browser and the exact version of the assistive technology. So, we may need this step to ask the tester to verify they are using the correct version of the AT.

That is, this step could be:

> Tester verifies the version of the assistive technology being used exactly matches the version required and that it is running with a default configuration as specified on the "Test setup requirements" page of the wiki.
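
That verification could be as simple as an exact comparison against the version the admin configured for the run (a sketch; where the reported version comes from is an assumption):

```typescript
// Exact-match check, e.g. required version "JAWS 2020.1912.11".
function verifyAtVersion(requiredVersion: string, reportedVersion: string): boolean {
  return requiredVersion.trim() === reportedVersion.trim();
}
```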

Next:

> 3: Tester opens the test.

You don't need this step because the previous step says:

> 2: Tester gets a set of tests according to the browser and screen reader combination they will be using.

Perhaps you could merge these two into a single step that is:

> Tester is presented with the first incomplete test in the sequence of tests in the test run.

> 6: Tester submits the test for review.

The tester does their own review of what they have saved. I think the steps are, as in the current runner: the tester follows the steps, saves and previews results, then goes to the next test in the test run. After all tests are complete, the tester submits the test run.

Use case 3

> Admin Publishes Test Results

I wonder what level of granularity we want here. I think we will only publish results for complete runs. That is the only unit worth reviewing by an AT developer.

On the other hand, say an AT developer fixes a bug that is supposed to change behavior in only one test in an entire run. I wonder if we should always re-run the entire run and republish it? It seems that would be necessary; if we didn't, then unintended side effects of a bug fix would not be caught.

So, I'm thinking that we only publish complete test runs, and we only review complete test runs with an AT developer. Thus, a test run needs to be complete, i.e., all tests in the plan have been run for the AT/Browser combo, before we would review it. We should specify this in the process (#41).
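
In code terms, the publish gate I'm suggesting might look like this (hypothetical shapes):

```typescript
interface RunForReview {
  combo: string;                 // e.g. "JAWS + Chrome"
  results: (string[] | null)[];  // one slot per test in the plan
}

// A run is reviewable/publishable only when every test in the plan
// has recorded results for this AT/Browser combination.
const isComplete = (run: RunForReview): boolean =>
  run.results.every(r => r !== null);

function publishRun(run: RunForReview): void {
  if (!isComplete(run)) {
    throw new Error('Only complete test runs can be reviewed and published');
  }
  // ...mark the run's report as published
}
```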

> This scenario describes the situation where the results of the execution of a test are incorrect and need to be executed again.

We might want a specific test within a test run to be re-run. Should there be the ability for the Admin to remove results from that single test only, which would make the run incomplete, and then put the test run back in the tester's queue? In that case, it would show the tester that, for example, 15 of 16 tests are complete. Opening the run would go directly to the incomplete test. Perhaps there could be a note from the admin at the top of the test.
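
That idea could be sketched as follows (all names hypothetical): clearing one test's results makes the run incomplete and returns it to the tester's queue with an optional note:

```typescript
interface ReRunnableRun {
  results: (string[] | null)[]; // one slot per test in the plan
  inTesterQueue: boolean;
  adminNote?: string;
}

// Remove one test's results; the run becomes incomplete (e.g. 15 of 16)
// and returns to the queue, optionally with a note shown atop the test.
function invalidateTest(run: ReRunnableRun, testIndex: number, note?: string): void {
  run.results[testIndex] = null;
  run.inTesterQueue = true;
  run.adminNote = note;
}
```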

isaacdurazo commented 4 years ago

Thanks for the thoughtful feedback, @mcking65! I've now incorporated your suggestions and expanded the use cases with two more alternative flows that include: 1) requiring a specific AT version when submitting a test run to testers and 2) an option for selecting a group of testers depending on the needs of the test run created by the test admin. I've also made several improvements to the wording to make it consistent with the Working Mode Document.

These use cases now live on the wiki page. Let's keep this issue open to continue the discussion.

zcorpan commented 4 years ago

There's also a wiki page for "high-level use cases". Should these two pages be merged, or should they be kept separate but renamed?