isaacdurazo opened this issue 2 years ago
@isaacdurazo I like it! Some thoughts:
Candidate Test Plans column: the header cell of this column proposes a tabbed navigation
Is the proposal that a table column header cell will contain a tab list? That is extremely unusual from a semantic perspective, and raises concerns for me about cognitive load. In addition and most critically, the names of the two tabs will bubble up to become the accessible name of the cell itself, meaning that as a screen reader user navigates across columns, they will hear both tab names announced.
If it is intended that a user will be able to switch between two table views, the tab list should be before the table and not inside it.
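For illustration, the APG tabs pattern placed before the table rather than inside a header cell might look something like this (ids, labels, and counts are placeholders):

```html
<!-- Tab list before the table, not inside a column header cell -->
<div role="tablist" aria-label="Test Plan phase">
  <button role="tab" aria-selected="true" aria-controls="candidate-panel" id="candidate-tab">
    5 Candidates
  </button>
  <button role="tab" aria-selected="false" aria-controls="recommended-panel" id="recommended-tab" tabindex="-1">
    3 Recommended
  </button>
</div>
<div role="tabpanel" id="candidate-panel" aria-labelledby="candidate-tab">
  <table>
    <!-- Candidate Test Plans table goes here -->
  </table>
</div>
```

This keeps the table's column header cells as plain text, so screen reader users hear only the column name when navigating across columns.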
X number of open issues
Is this the number of issues for the plan in total, or is it scoped to the vendor user who is signed in?
Ok: once an AT developer approves a Candidate test plan an "Ok" label is displayed next to its name
I think this should be more explicit. For me, "Ok" without additional context doesn't get across the intended meaning, i.e. that the vendor has approved it. How about just using the word "Approved", as is the case on the single test plan page?
Note: this does relate to my previous question. If, say, Vispero have approved a plan, but issues raised by Apple are still open, will the Vispero representative see the approved status on its own? Or will it be accompanied by the number of open issues too?
The cell contents of these columns could either be the word "None" if there are no test results
Alternatively, is there a downside to only showing columns for the combinations that were tested?
Add your Review: Clicking this opens a Dropdown with three checkboxes to choose from depending on the type of review and text area. The three checkbox options are: Approve, Provide Feedback, and Request Changes.
These seem like mutually exclusive options (e.g. approving while requesting changes doesn't make sense as a use case). So should they be radios?
Actions: There are four actions in this row, "Open Test", "Raise an Issue about this Test", "Leave Feedback" and "File an AT bug".
I feel like maybe this has been discussed, but I'm not sure the distinction between raising an issue and leaving feedback will be clear to the vendors. I'm not sure I remember that distinction (if one was determined).
@jscholes thanks a lot for the feedback! My response to your comments:
Is the proposal that a table column header cell will contain a tab list?
Yes, that's my proposal, but after reviewing your comment and concern I realize that we should move away from this. I think having the tab list outside the table should work.
Is this the number of issues for the plan in total, or is it scoped to the vendor user who is signed in?
It is the number of issues for the plan in total. These issues will be properly labeled on GitHub, though we haven't yet defined which labels we will use.
I think this should be more explicit. For me, "Ok" without additional context doesn't get across the intended meaning, i.e. that the vendor has approved it. How about just using the word "Approved", as is the case on the single test plan page?
I see your point, let's propose "Approved".
Note: this does relate to my previous question. If, say, Vispero have approved a plan, but issues raised by Apple are still open, will the Vispero representative see the approved status on its own? Or will it be accompanied by the number of open issues too?
Great question! The case you describe came up while I was working on these mockups, and I was hoping we could all arrive at the best solution together. My thinking right now is that if, say, Vispero approved a Test Plan and someone else then raises an issue, the "Approval" label should be removed and reinstated once those issues have been resolved.
Alternatively, is there a downside to only showing columns for the combinations that were tested?
Not at all, but we have all Candidate Test Plans in the same table, and there might be a case where one of the Test Plans didn't get tested with a particular AT and Browser combination. Since they all share the same table, we wouldn't be able to display the progress bar for that Test Plan with said AT and Browser. Does that make sense?
These seem like mutually exclusive options (e.g. approving while requesting changes doesn't make sense as a use case). So should they be radios?
Whoops, that was an oversight on my end. You are 100% right. These should be radio buttons.
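A minimal sketch of the corrected control, assuming a fieldset of radios plus the text area (control names, values, and copy are placeholders):

```html
<form>
  <fieldset>
    <legend>Add your Review</legend>
    <!-- Radios make the mutually exclusive choice explicit -->
    <label><input type="radio" name="review-type" value="approve"> Approve</label>
    <label><input type="radio" name="review-type" value="feedback"> Provide Feedback</label>
    <label><input type="radio" name="review-type" value="request-changes"> Request Changes</label>
  </fieldset>
  <label for="review-comment">Comments</label>
  <textarea id="review-comment" name="review-comment"></textarea>
  <button type="submit">Submit Review</button>
</form>
```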
I feel like maybe this has been discussed, but I'm not sure the distinction between raising an issue and leaving feedback will be clear to the vendors. I'm not sure I remember that distinction (if one was determined).
The distinction is that raising an issue blocks approval, while leaving feedback doesn't. It's good that you are bringing this up. We should figure out how to add more clarity to it.
I don't quite understand the intent of the top-level table. At this point we have 3 screen readers; later there will be more. At the highest level, we are concerned with tracking the plan review status for each screen reader; browsers do not matter. The table shows the status in the first column with the plan name, but we actually need a status for each screen reader. For the Disclosure Navigation Menu plan, for example, we need to know for each screen reader: how many issues were raised, whether it is approved, etc.
@isaacdurazo
My thinking right now is that if let's say, Vispero approved a Test Plan and then someone else raises an issue, the "Approval" label should be removed and reinstated once those issues have been resolved.
Makes sense to me. However, we should consider the case of a vendor not being actively involved in the process, either at all or to the degree that we'd like. At some point, if they don't approve a given test plan, it will still need to continue to the next phase regardless. And, we don't want to suggest that one vendor approving a plan implies approval from another who is actively invested.
we have all Candidate Test Plans in the same table and there might be a case where maybe one of the Test Plans didn't get tested with a particular AT and Browser combination. Since they all share the same table, ...
Great point. That completely hadn't occurred to me.
After consideration of feedback and discussion at the last two ARIA-AT CG meetings, @isaacdurazo is proposing the following changes to the above mockups. The mockups will be updated to reflect these changes soon.
We're looking forward to discussing these changes in more detail at the upcoming CG meeting. We're happy to provide additional context and considerations for these choices or to consider other options for these design challenges.
Thank you @s3ththompson for explaining the last changes we would like to propose. Here is the updated version of both visual and text-based mockups.
This is the page where an AT Developer begins a Test Plan review process. It has a table for every AT that lists all Test Plans in the Candidate phase.
Heading: Comprised of an <h1> with the name of the page. I'm using "Candidate Test Plans" as a placeholder.
Introduction: Comprised of an <h2> with the word "Introduction" and a <p> with a short description of the page.
Candidate Test Plans table: As mentioned before, there is a table for every available AT. This table has one column for Candidate Test Plans and one for the review status.
Candidate Test Plans column: the header cell of this column reads Candidate Test Plans. The Test Plan names in this column are links that start the review process when clicked.
Review Status column: There are three statuses depending on where the review process is:
Ready for Review: When a Test Plan reaches the Candidate phase, it would automatically be added to this page and a "Ready for Review" label is displayed next to its name.
X number of open issues: If an AT Developer requests changes for a Test Plan, that request culminates in the creation of a Github Issue. This is also reflected in this label so users can see how many issues there are for a given Candidate Test Plan.
Approved: once an AT developer approves a Candidate test plan an "Approved" label is displayed next to its name.
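Since each change request culminates in a GitHub Issue, the app could deep-link to a pre-populated issue form using GitHub's `issues/new` query parameters (`title`, `body`, `labels`). The repository path and label name below are placeholders, not decided values:

```html
<!-- Hypothetical "Request Changes" link; OWNER/REPO and the label are placeholders -->
<a href="https://github.com/OWNER/REPO/issues/new?title=Changes%20requested%3A%20Disclosure%20Navigation%20Menu%20Example&labels=candidate-review">
  Request Changes for this Test Plan
</a>
```

Counting open issues with a given label would then give the "X number of open issues" value for each plan.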
Once the AT Developer selects a Candidate Test Plan, they land on this page. Here we have a header containing UI elements that display different statuses. The main area resembles the Test Run page, with a left-hand navigation section containing an ordered list of all tests and the full test material on the right.
Header: The header on this page is comprised of 3 different elements: an <h1>, a row of "review status" indicators, and another row with actions. This Header is persistent on each test page. These are the details:
Heading: Comprised of an <h1> with the name of the Test Plan, e.g. "Disclosure Navigation Menu Example", followed by a label with the number of open issues, if any.
Status row: There are 3 different "status" indicators in this row. One for the phase in which the Test Plan is (Candidate in this case), another one for the number of open issues, and lastly one for the kind of review an AT Developer might have left. For the latter, there are 3 different kinds:
Approved: When an AT Developer approves a Candidate Test Plan, we display a green check mark icon followed by the text [username] Approved this Test Plan.
Requested Changes: When an AT Developer requests changes on a Candidate Test Plan, we display a red x icon followed by the text: [username] requested changes.
Left Comments: When an AT Developer leaves feedback on a Candidate Test Plan, we display a blue bubble text icon followed by the text: [username] Left comments.
Actions: There are two different actions an AT Developer can take from here to complete their review of a Candidate Test Plan
Design Pattern: Takes the user to the Test Plan's Design Pattern page on the Aria Authoring Practices website.
Design Pattern Example: Takes the user to the Test Plan's Design Pattern Example page in the Aria Authoring Practices website.
Main Area: As mentioned before, the main area of this page is identical to the current Test Run page with a few additions:
Test Navigator: This ordered list of tests has an icon on the left for each test to indicate progress.
Not reviewed: This means the reviewer hasn't looked at this test yet. The icon is a gray circle.
Reviewed: This means the reviewer has looked at this test and navigated to the next one using the "Next button" at the bottom of the page. The icon is a green circle with a white checkmark on it.
Issue raised: When the reviewer has raised an issue with a particular test: The icon is an orange circle with a white exclamation mark.
Test Material: This area contains the same materials as the Test Run page: instructions, test page link, commands, and assertions. All the output and results are visible as read-only. Next to the main heading for every test, two buttons have been added: "Raise an issue" and "File an AT bug".
When the reviewer reaches the last test in a Candidate Test Plan and clicks "Finish Review", they are prompted with a Modal with the following content.
Heading: This reads: Great, [username]! You have reviewed every test in the [Candidate Test Plan name] with [Assistive Technology]
Summary: If issues have been raised along the way, this area will summarize that: E.g. You have raised 2 issues for this Test plan
Finish your review: This area lists three checkboxes to choose from depending on the type of review and a text area. The three checkbox options are: Approve, Provide Feedback, and Request Changes. Each of these labels has a short one-line description underneath.
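The Test Navigator progress indicators described above could be marked up so each status is available as text as well as an icon, giving screen reader users the same information; the structure, class names, and test names below are illustrative:

```html
<nav aria-label="Test Navigator">
  <ol>
    <li>
      <a href="#test-1">Navigate forwards to a collapsed disclosure button in reading mode</a>
      <!-- Status rendered as text, with the icon conveyed via CSS on the class -->
      <span class="status status-reviewed">Reviewed</span>
    </li>
    <li>
      <a href="#test-2">Navigate backwards to a collapsed disclosure button in reading mode</a>
      <span class="status status-issue">Issue raised</span>
    </li>
    <li>
      <a href="#test-3">Activate a collapsed disclosure button in reading mode</a>
      <span class="status status-unreviewed">Not reviewed</span>
    </li>
  </ol>
</nav>
```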
Overall, sounds like a simple, clear design with straightforward calls for action.
I like that there is space for an introductory description.
I like that at a high level, it is organized by AT so the rep for each AT has a clear place to focus. This structure can work for 3 AT but also scales well up to 6 or 12 AT.
I have several suggestions:
<h2>JAWS Test Plan Reviews by Vispero</h2>
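Extending that suggestion, each AT could get its own heading and two-column table; a rough sketch (the link target and row content are placeholders, column names follow the proposed mockups):

```html
<h2>JAWS Test Plan Reviews by Vispero</h2>
<table>
  <thead>
    <tr>
      <th scope="col">Candidate Test Plan</th>
      <th scope="col">Review Status</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><a href="/candidate-test-plans/disclosure-navigation">Disclosure Navigation Menu Example</a></td>
      <td>Ready for Review</td>
    </tr>
  </tbody>
</table>
```

This structure scales by simply repeating the heading-plus-table pair for each additional AT.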
With respect to plan status, I am concerned about the word "issue". It is likely that AT vendors could confuse "issues" with "test failures". Let's work on ways of labeling information that mitigate the risk of such confusion.
Perhaps we could refer to the GitHub issues raised by AT developers as "Test Feedback". If we can distinguish between "general feedback" and "changes requested" that would be even better.
If an AT developer has read some or all of a test plan but not filed any feedback, would we still show the status of "ready for review"? Or, would we show a status like "Partially Read" or "Read" or "Review In Progress"?
Here are suggestions for allowed plan review status values:
The description of main content says:
The main content of this page has a table listing every Assistive Technology used for every Candidate Test Plan and a column for each Browser used for each AT.
Is that a copy/paste error? I don't understand it. This page is about one test in one plan with results for that test in one AT. It should not show data for multiple AT nor for multiple plans.
That AT may have candidate test results from more than one version of the AT and the test could have been run with that AT in more than one browser. For example, for JAWS, the disclosure menu test plan could have results for the test of navigating forward to the menu for two versions of JAWS in both Chrome and Firefox. This has consequences discussed below.
The description of the header says:
Heading: Comprised of an <h1> with the name of the Test Plan, e.g. "Disclosure Navigation Menu Example" followed by a label with the number of open issues
I don't think we need the plan name in the H1 text. The H1 text needs to indicate which test is being reviewed just like in the test runner, e.g.:
Testing task: 1. Navigate to the first unchecked radio button in a group in reading mode
Although, I'd prefer different wording from the runner. I recommend:
Review Test 1 of 26: Navigate to the first unchecked radio button in a group in reading mode
I don't think that we should put an issue count in the H1 text. I am concerned that people will not understand what that means. Further, it is unlikely that a vendor will open multiple GitHub issues for a single test in a plan. In fact, we might not want more than one issue from a given vendor for a given test in a given plan.
Nevertheless, in the H1, I think it would be helpful to also indicate both which AT the plan is for and the current read/feedback status for the test. Note that the status for a test is not a "Review Status" because the review status is for the plan, not the test. We should not imply that we are asking for "approval" on every test.
The status values we could use for the test in the H1 as well as in the left navigation could be:
The header description says:
Status row: There are 3 different "status" indicators in this row. One for the phase in which the Test Plan is (Candidate in this case), another one for the number of open issues, and lastly one for the kind of review an AT Developer might have left. For the latter, there are 3 different kinds:
- Approved: When an AT Developer approves a Candidate Test Plan, we display a green check mark icon followed by the text [username] Approved this Test Plan.
- Requested Changes: When an AT Developer requests changes on a Candidate Test Plan, we display a red x icon followed by the text: [username] requested changes.
- Left Comments: When an AT Developer leaves feedback on a Candidate Test Plan, we display a blue bubble text icon followed by the text: [username] Left comments.
I think we should have a slightly more verbose header to avoid confusion. Clarity trumps brevity here.
Here is a suggestion for header content:
<h1>Review Test I of N: TEST_NAME using AT_NAME (READ_STATUS)</h1>
<ul>
<li>Test I in Candidate Test Plan: TEST_PLAN_NAME</li>
<li>Status of candidate review by VENDOR_NAME: PLAN_REVIEW_STATUS</li>
<li>Target candidate review phase completion: TARGET_DATE</li>
<li>Feedback from VENDOR_NAME:
<ul>
<li>STATE_OF_FEEDBACK_FOR_TEST</li>
<li> STATE_OF_FEEDBACK_FOR_PLAN for the tests in this plan</li>
</ul>
</li>
</ul>
For example:
<h1>Review Test 1 of 26: Navigate to the first unchecked radio button in a group in reading mode using JAWS (Previously Viewed)</h1>
<ul>
<li>Test 1 in Candidate Test Plan: Radio Group Example Using aria-activedescendant</li>
<li>Status of candidate review by Vispero: In Progress</li>
<li>Target candidate review phase completion: December 31, 2022</li>
<li>Feedback from Vispero:
<ul>
<li><a href="link-to-issue">Roxana Fischer requested changes to this test</a></li>
<li>Feedback filed for 3 tests in this plan</li>
</ul>
</li>
</ul>
The description of the main area says:
Test Material: This area contains the same materials from the Test Run page: instructions, test page link, commands, and assertions. All the output and results are visible as read-only.
Please do not show a read-only form. That is crazy hard to read with a screen reader because you have to read all the inputs to figure out the results. Please use the static table results format that is shown in the runner when reviewing completed tests. Of course, you need to also include the instructions and button to open the test case.
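For instance, each command's result could be rendered as a static table like the one in the runner's review view. A rough sketch, borrowing the Command/Support/Details columns described later in this thread; the example command, output, and details text are invented for illustration:

```html
<table>
  <caption>Results for: Navigate forwards to a collapsed disclosure button in reading mode</caption>
  <thead>
    <tr>
      <th scope="col">Command</th>
      <th scope="col">Support</th>
      <th scope="col">Details</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Tab</td>
      <td>Full</td>
      <td>Output: "Show Navigation, button, collapsed". All assertions passed.</td>
    </tr>
  </tbody>
</table>
```

A static table lets a screen reader user scan row by row instead of reading every disabled input to reconstruct the results.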
BTW, in the test runner when reviewing completed tests, we really, really need the instructions and button to open the test case. Having to go into edit mode to get to those elements is a serious drag. I ran into this problem constantly when working on conflicts. I needed to run the test to assess a plan of action, and the only way to get access to the test was to go into edit mode.
The description says:
Next to the main heading for every test two buttons have been added: "Raise an issue" and "File an AT bug"
I think you mean for each command, not each test? The entire page is for a single test that has results for one or more commands.
I assume there are also navigation buttons for previous and next?
This makes sense. @isaacdurazo will rework the copy around “feedback” (and/or “changes requested”) instead of “issues”.
You suggested the following values:
Some of these options are not mutually exclusive. For example, “X tests changed since last review” could apply independently of the review status.
I suggest that we separate these options into three discrete indicators:
For now, we’ll keep the test plan report visible in the test queue while it’s in candidate review. Going forward, we need to map out the lifecycle of a report and exactly when and how it moves between different pages and different roles. I believe we have some time scheduled for a deeper discussion there.
We have two followup questions for the additional columns on the AT tables:
@isaacdurazo will update the mockups to reflect this addition. The use case here makes sense.
We’ll reference this markup during implementation.
Noted, good idea.
Thanks for highlighting the accessibility issues with using a read-only form. We can render the submitted result values as plaintext instead of form inputs. Given that, would you prefer:
Both options would include the same information. Is the table easier to navigate? There was some concern that the cells contained too much information for longform readability.
(Also, agreed on adding instructions + open test page button to results summary in test runner)
The two action buttons “raise an issue” (now “raise feedback”) and “file an AT bug” are to the right of the test content. The previous / next buttons are below the test content.
This layout mirrors the layout of action buttons in the test runner view. Your comment suggests creating separate action buttons per-command? I don’t think we have a UX pattern for this. I would suggest starting with the buttons in their usual location (per-test) and re-visiting after we have some feedback from vendors.
We’ll reference this markup during implementation.
Thanks for your thorough review Matt!
Candidate Test Plan Review Process - AT Developer User Journey
Candidate Test Plans Queue
This is the page where an AT Developer begins a Test Plan review process. It has a table listing all Test Plans in the Candidate phase as well as the percentage of passing tests for each AT and Browser combination the Test Plans were run with.
UI Characteristics and details
Heading: Comprised of an <h1> with the name of the page. I'm using "Candidate Test Plans" as a placeholder.
Introduction: Comprised of an <h2> with the word "Introduction" and a <p> with a short description of the page.
Candidate Tests table: This table has one column for Candidate Test Plans and one column for every AT and Browser combination used.
Candidate Test Plans column: the header cell of this column proposes a tabbed navigation, where the AT Developer can switch between Candidates and Recommended Test Plans. These two options are preceded by the number of Test Plans available under each. E.g. "5 Candidates - 3 Recommended". The "active" tab in this header cell is in bold, while the inactive one is in normal text. The Test Plan names in this column are links and next to them there is a label. There are three different labels depending on where the review process is:
Ready for Review: When a Test Plan reaches the Candidate phase, it would automatically be added to this page and a "Ready for Review" label is displayed next to its name
X number of open issues: If an AT Developer requests changes for a Test Plan, that request culminates in the creation of a Github Issue. This is also reflected in this label so users can see how many issues there are for a given Candidate Test Plan
Ok: once an AT developer approves a Candidate test plan an "Ok" label is displayed next to its name
AT and Browser column: there is a column for every AT and Browser combination available. The cell contents of these columns could either be the word "None" if there are no test results, or a progress bar showing the percentage of passing tests for a given Test Plan under a given AT and Browser combination.
Single Candidate Test Plan
Once the AT Developer selects a Candidate Test Plan they would land on this page. Here we have a header containing UI elements to display different statuses and actions the user can utilize for reviewing. The main content of this page has a table listing every Assistive Technology used for every Candidate Test Plan and a column for each Browser used for each AT.
UI Characteristics and details
Header: The header on this page is comprised of 4 different elements: breadcrumbs, an <h1>, a row for different "status" indicators, and a row for actions. It is important to mention that from this page the user can navigate two more levels deeper in their user flow while reviewing a Test Plan and its Test Results. This Header is persistent on each page. These are the details:
Breadcrumbs: similar to the Reports page, at the top of this page we have breadcrumbs to let the user know where they are and provide another way to navigate. The breadcrumbs on this page would look like this: Candidate Tests > Disclosure Navigation Menu Example
Heading: Comprised of an <h1> with the name of the Test Plan, e.g. "Disclosure Navigation Menu Example".
Status row: There are 3 different "status" indicators in this row. One for the phase in which the Test Plan is (Candidate in this case), another one for the number of open issues, and lastly one for the kind of review an AT Developer might have left. For the latter, there are 3 different kinds:
Approved: When an AT Developer approves a Candidate Test Plan, we display a green checkmark icon followed by the text [username] Approved this Test Plan.
Requested Changes: When an AT Developer requests changes on a Candidate Test Plan, we display a red x icon followed by the text: [username] requested changes.
Left Comments: When an AT Developer leaves feedback on a Candidate Test Plan, we display a blue bubble text icon followed by the text: [username] Left comments.
Actions: There are four different actions an AT Developer can take from here to complete their review of a Candidate Test Plan
Open Test Plan Run: Takes the user to the Test Plan Run page. There they are able to navigate back and forth between tests and experience what a Tester goes through when executing a Test Plan.
Design Pattern: Takes the user to the Test Plan's Design Pattern page on the Aria Authoring Practices website.
Design Pattern Example: Takes the user to the Test Plan's Design Pattern Example page in the Aria Authoring Practices website.
Add your Review: Clicking this opens a Dropdown with three checkboxes to choose from depending on the type of review and text area. The three checkbox options are: Approve, Provide Feedback, and Request Changes. Each of these labels has a short one-line description underneath.
Introduction: Comprised of an <h2> with the word "Introduction" and a <p> with a short description of the page.
Assistive Technology table: This table has one column for Assistive Technology and one for every Browser used to test the Candidate Test Plan being reviewed.
Assistive Technology column: The Assistive Technologies in this column are links that when clicked, take the user to a report view of that Assistive technology. More detailed information about this is in the following mockup.
Browser column: The cell contents of these columns could either be the word "None" if there are no test results or a progress bar showing the percentage of passing tests for a given Browser and AT.
Single Assistive Technology with multiple browsers
After selecting an Assistive Technology, the user lands on this page, where they can look at a dedicated results table for each browser, which lists each of the tests within the Test Plan with columns for required assertions, optional assertions, and unexpected behaviors.
UI Characteristics and details
Header: The header on this page looks exactly like the one in the previous one with the exception of:
Breadcrumbs: Since the user is one level deeper, the breadcrumbs on this page would look like this: Candidate Tests > Disclosure Navigation Menu Example > JAWS
Heading: For the same reason, the header on this page would look like this: "Disclosure Navigation Menu Example with JAWS".
Introduction: Comprised of an <h2> with the word "Introduction" and a <p> with a short description of the page.
Browser table: For every browser used to test the Test Plan being reviewed, a results table is included. This table is complemented above by a heading and an Actions section.
Heading: Comprised of an <h2> with the name of the Browser.
Actions: There are two actions in this row, "View Complete Results" and "[x number] Tests were skipped".
View Complete Results: this takes the user to the deepest level of the report results of a Candidate Test plan, where they have a more detailed view of the results for every single test. More detailed information about this is in the following mockup.
[x number] Tests were skipped: takes the user to a table listing all the skipped tests in the deepest level of the report results of a Candidate Test plan. More detailed information about this is in the following mockup.
Table: This table has a main column for every test and three for results, which are "Required Assertions", "Optional Assertions" and "Unexpected Behaviors".
Test name column: the cells in this column display the test name, which is a link that, when clicked, takes the user to a detail view page. More detailed information about this is in the following mockup.
Required Assertions: the cells in this column display the number of passing required assertions of the total, e.g. "3 of 3 passed".
Optional Assertions: the cells in this column display the number of passing optional assertions of the total, e.g. "3 of 3 passed".
Unexpected Behaviors: the cells in this column display the number of unexpected behaviors, e.g. "2 found".
Single Assistive Technology and Browser combination
Lastly in the review process, on this page, the AT Developer can look at a detailed reports table for each Test for a given AT and Browser.
Header: The header on this page looks exactly like the one in the previous one with the exception of:
Breadcrumbs: Since the user is one more level deeper, the breadcrumbs on this page would look like this: Candidate Tests > Disclosure Navigation Menu Example > JAWS > Chrome
Heading: For the same reason, the header on this page would look like this: "Disclosure Navigation Menu Example with JAWS and Chrome".
Introduction: Comprised of an <h2> with the word "Introduction" and a <p> with a short description of the page.
Single Test table: For every single test within a Test Plan being reviewed, a results table is included. This table is complemented above by a heading and an Actions section.
Heading: Comprised of an <h2> with the name of the test, e.g. "Navigate forwards to a collapsed disclosure button in reading mode".
Actions: There are four actions in this row, "Open Test", "Raise an Issue about this Test", "Leave Feedback" and "File an AT bug".
Open Test: Opens the test the user is looking at on the Test Run page
Raise an Issue about this Test: Takes the user to the project repository on Github where they can start from a pre-populated issue based on their selected action
Leave Feedback: Takes the user to the project repository on Github where they can start from a pre-populated issue based on their selected action
File an AT bug: Takes the user to the appropriate place where they can file a bug for an Assistive Technology.
Table: This table has three columns, which are: Command, Support, and Details. This table looks exactly like the one in the reports page.