Open isaacdurazo opened 1 year ago
Hi @mcking65, this is ready for your review.
@isaacdurazo @ccanash
Could you please add a link to the spec that @jugglinmike wrote? If I recall correctly, this mockup is not for the MVP but rather for a V2.
The MVP use case we last discussed is identifying a completed report that you want to re-run. Typically, this is when you want to re-run with a different version of the AT, the browser, or both. The automation should generate a new report run in the test queue and assign verdicts to all assertions where the AT response matches the AT response of the originally identified report. Any test where the AT response doesn't match would be identified as having a conflict. I believe Mike fully specified this use case.
The important prerequisite is that a final report has been generated by humans.
One surface where we now have access to final reports at any phase in the project is the report status dialog being made for issue #649. You could add a "re-run with bot" button there. That would open a dialog where you choose AT version and browser. Note, you would not be able to choose a different AT. It would be useful to choose a different browser. The result would be a new entry in the test queue with the bot results. Everything you designed above would be useful when it comes to viewing the progress of that run and assigning a human to finish the work where there were conflicts.
Hi @mcking65 , here is the link to the spec that @jugglinmike wrote: https://docs.google.com/document/d/1bjXZ21gGFH_rdP6IGcgSohbSNikpPqntV_wX-5nhUx8/edit .
Thank you, I'll try to give this some more attention in the next few days. I think I should prioritize the specs for the refactor project, at least for P0 and P1 requirements. I don't want to hold up @isaacdurazo's work on it, though, so please give me an idea of the timeline I should target.
Hi @mcking65, Isaac has time scheduled to work on this this week. Would you be able to review it today or tomorrow? Thanks!
Hey there @mcking65, I have returned!
Could you please add a link to the spec that @jugglinmike wrote? If I recall correctly, this mockup is not for the MVP but rather for a V2.
This mockup is for the MVP, and I believe that it almost addresses the use case. Howard and I just today identified one additional capability we need to add.
First, I'll explain how we envisioned Test Admins using this UI to complete the use case. Then, I'll describe the missing piece.
The MVP use case we last discussed is identifying a completed report that you want to re-run. Typically, this is when you want to re-run with a different version of the AT, the browser, or both. The automation should generate a new report run in the test queue and assign verdicts to all assertions where the AT response matches the AT response of the originally identified report. Any test where the AT response doesn't match would be identified as having a conflict. I believe Mike fully specified this use case.
Although the design Isaac has proposed does not include any UI explicitly dedicated to "re-running", I believe it can be used for that purpose.
Whenever Test Admins create Test Plan Runs for Test Plans which have completed reports, they will effectively be re-running that Test Plan. Once the automation jobs are complete, the system will assign assertion verdicts only to the AT responses it recognizes; it will omit verdicts for novel AT responses. From there, Test Admins can review the conflicts from these Test Plan Runs using the same approach they use for review today; the UI we've already designed and implemented for that purpose will tell them which Test Results differ (because they will have different AT responses and they will lack verdicts).
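For illustration, here is a rough sketch of that matching rule; the types and field names below are hypothetical, not the App's actual schema:

```typescript
// Hypothetical types; the real ARIA-AT App schema differs.
interface TestResult {
  testId: string;
  atResponse: string;
  verdicts: Record<string, 'pass' | 'fail'> | null; // per-assertion verdicts
}

// Copy verdicts from the completed report onto the bot-collected results
// only where the AT response matches exactly; novel responses are left
// without verdicts so they surface as conflicts.
function copyMatchingVerdicts(
  original: TestResult[],
  botCollected: TestResult[]
): TestResult[] {
  const byTestId = new Map<string, TestResult>(
    original.map((r): [string, TestResult] => [r.testId, r])
  );
  return botCollected.map((result) => {
    const prior = byTestId.get(result.testId);
    return prior && prior.atResponse === result.atResponse
      ? { ...result, verdicts: prior.verdicts }
      : { ...result, verdicts: null };
  });
}
```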
The piece that is missing is the ability to compare Test Plan Runs with different phases. Currently, it's only possible to review conflicts for Test Plans Run in the "Draft" phase. In order for a mock-up like Isaac's to fully satisfy the requirements, it should also include some UI which allows Test Admins to choose which Test Plan Runs they would like to compare (regardless of phase). That way, they could understand how the AT responses collected by the automation system differed from the AT responses from a Test Plan Run in the Candidate or Recommended phase.
Does all of that make sense? Do you think that, after adding some UI for setting the basis of comparison, it will be necessary and sufficient--a true MVP?
@isaacdurazo I don't think we have any need for pausing (plus, I don't think the design can support it--it would need yet another state for collection jobs, e.g. "paused").
Regarding the button labeled "Start Response Collection": what state should the collection jobs be in before a Test Admin presses that button? I'm not aware of a circumstance where a Test Admin would create a Test Plan Run without immediately starting the collection, so this may be an opportunity for simplification. Could the collection begin automatically whenever a Test Plan Run is created and assigned to the collector?
@jugglinmike
Agreed with the feedback on pausing and on not needing to delay the start of the run.
To me, the most natural way to rerun would be from the report status dialog in data management.
The add to queue button could be in the status column of every row in the above table.
Currently, pressing add to queue in that dialog auto-adds the run to the queue and gives a confirmation. We could change that for screen reader/browser combinations that are supported by automation. Pressing add to queue could prompt with the question, "Automatically generate report with automation?" It could show buttons for "Run with bot now", "Add to queue to run later", and "Cancel". Pressing either "Run with bot" or "Add to queue" would generate an appropriate confirmation.
@isaacdurazo wrote about the test queue:
When the job is done a "Create run and copy results for ..." dropdown button gets displayed, which allows the admin to assign a human tester to provide verdicts"
Note that the report run is already created; it is in the test queue. It is also possible that, independent of the bot run, there are people assigned to execute the run manually. They could have been assigned before or during the bot run.
When the bot is done, under the bot's name (we have to name it) in the testers column, it should say "Responses for x of y tests recorded. Verdicts for x of y assertions assigned." In the report status column, a button for "Finish BOT_NAME run" should be present (more later on why in report status and not actions). Pressing this button would open a menu with options for:
- Assign to ...
- Mark as finished
- Delete
Assign to and delete are obvious functionality. Mark as finished would be disabled unless all of the following conditions are met:
Assign to should not show names of people who meet all the following conditions:
- They are already assigned to the run.
- They have executed at least 1 test in the run.
Alternatively, if the admin tries to assign a person who meets both of those conditions, we could prompt to overwrite their manually recorded results. I think that is complicated and could create problems. I'd rather avoid that. If the admin wants to assign someone who already has results recorded for the run, the admin can first delete that person's results from the queue and then assign that person to finish the bot run.
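As a rough sketch, the exclusion rule above might look something like this (the shapes and names here are hypothetical):

```typescript
// Hypothetical shape; names are illustrative only.
interface Tester {
  username: string;
  isAssignedToRun: boolean;   // already assigned to this run
  executedTestCount: number;  // tests they have executed in this run
}

// Hide testers who are already assigned to the run AND have executed
// at least one test in it.
function assignableTesters(testers: Tester[]): Tester[] {
  return testers.filter(
    (t) => !(t.isAssignedToRun && t.executedTestCount >= 1)
  );
}
```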
So, why put the "finish" button in the report status column? Mostly because its presence is dependent on specific conditions and is temporal. It's presence is a form of status information, and I don't want to have to read the actions column or the testers column to know if it is available.
Other information that should be in the status column is whether conflicts are present. We desperately need some improvements to the status column that are not related to automation. I will be creating a separate issue for that.
For clarity, I'm thinking the bot could be named "AT_Name Bot", e.g. "JAWS Bot" or "VoiceOver Bot". This naming convention could be particularly useful in a variety of contexts.
The runner part sounds good to me.
Thanks, @mcking65! When we designed this proposal internally, we tried to honor your vision of treating the automation system like a user. Much of my response below is geared toward reducing the amount of bot-specific functionality so that the system integrates more seamlessly with the existing workflow. This, in turn, will reduce cognitive load for users and align more closely with the project's Working Mode.
- Assign to ...
We expect that it would take about the same amount of effort (if not slightly less) to implement Test Plan Run reassignment for any user--human or otherwise. Although we haven't witnessed a need for this, we identified some plausible use cases (e.g. recovering partial work from an unresponsive Tester).
I normally wouldn't advocate for features based on theoretical need, but given the implementation reality and given that the resulting UI would be more general-purpose, I think it's worth considering.
- Mark as finished.
What does it mean to "Mark [a Test Plan Run] as finished"? Is this a status that is specific to Test Plan Runs assigned to a bot user?
- Delete
Can we support the deletion of these Test Plan Runs using the existing button in the "Actions" column (the one labeled "Delete for...")?
For clarity, I'm thinking the bot could be named "AT_Name Bot", e.g. "JAWS Bot" or "VoiceOver Bot". This naming convention could be particularly useful in a variety of contexts.
Could you elaborate on some of those contexts? I'm asking because this does not reflect the system architecture, where there will be only one queue for all work, regardless of AT. The disconnect risks giving users an inaccurate impression about resource availability. For instance, they may incorrectly expect that work assigned to one bot user will not impact the delivery of work assigned to another bot user.
Hi @mcking65, tagging you for the review of Mike's message above. Thanks!
- Assign to ...
We expect that it would take about the same amount of effort (if not slightly less) to implement Test Plan Run reassignment for any user--human or otherwise. Although we haven't witnessed a need for this, we identified some plausible use cases (e.g. recovering partial work from an unresponsive Tester).
I normally wouldn't advocate for features based on theoretical need, but given the implementation reality and given that the resulting UI would be more general-purpose, I think it's worth considering.
I understand the theoretical use case. In the long term, it would be nice to have the ability to re-assign to eligible users. So, if that can be a serendipity, that's great. Note, the logic I described for eliminating specific users would be an essential element of such a feature.
Nonetheless, that does not change my recommendation for a clear CTA to finish bot runs in the report status column. As long as the "Finish BOT_NAME run" button is present in the report status column, the "Mark Final" button cannot be present. The information the admins need regarding what path to follow with the bot run is in the report status column, so that is where the CTA should be.
My objective here is to give admins a clear process where the information indicating need for action and the CTA to take it are always both in the same place. It's more than a little annoying to backtrack to a different place in the table to take the action you need to take after having read the information that calls for that action. This also gives the ability to make different views of the data in the tables down the road without complicating the experience for the admins.
The "assign to ..." option in the "FinishBOT_NAME run" menu would just be another entry point to the re-assignment logic that would be available in the testers column assuming we serendipitously add that too.
- Mark as finished.
What does it mean to "Mark [a Test Plan Run] as finished"? Is this a status that is specific to Test Plan Runs assigned to a bot user?
Nope. The test plan runner already has a finish button. It gives the tester the ability to say, "I think I'm done." We would give the admin the ability to make that judgment for the bot.
- Delete
Can we support the deletion of these Test Plan Runs using the existing button in the "Actions" column (the one labeled "Delete for...")?
Yep, it can be there too. But, again, having a clear CTA for the admin so they can follow a straight-line process is the goal.
For clarity, I'm thinking the bot could be named "AT_Name Bot", e.g. "JAWS Bot" or "VoiceOver Bot". This naming convention could be particularly useful in a variety of contexts.
Could you elaborate on some of those contexts? I'm asking because this does not reflect the system architecture, where there will be only one queue for all work, regardless of AT. The disconnect risks giving users an inaccurate impression about resource availability. For instance, they may incorrectly expect that work assigned to one bot user will not impact the delivery of work assigned to another bot user.
I'm not the least bit worried about such impressions, especially since there will be a day where we can run multiple instances of the bot in parallel.
Sure, we could give the bot a more generic name, and people can assume that the bot does whatever it needs to do. I am just thinking that there will be contexts where we are making comparisons across implementations and having specific names that don't require reading around the bot name to identify which AT is being driven would be helpful. It would even add more clarity to our existing run history. Also, we may want to make the name different depending on whether the implementation used is the native AT driver or the generic AT driver simulation. Given this week's conversations, I'd like to rehash what we call that generic wrapper that simulates responses from a native API implementation.
Got it, @mcking65. Isaac and I reviewed your requests, and we'll have a new iteration ready for review soon.
Regarding this detail from your original feedback:
Assign to should not show names of people who meet all the following conditions:
- They are already assigned to the run.
- They have executed at least 1 test in the run.
For this initial iteration, would it be acceptable to simply use the first of those two conditions? In other words: even if a Tester has not yet started testing, the Test Admin would have to explicitly delete their Test Plan Run before assigning the bot's Test Plan Run to them.
Here are the updated mockups as well as some new ones. I'm also adding notes here for the changes that were made.
Test plan report status dialog
Test Plans that can be run with the Automation bot will display an "Add to Test Queue" button
Add Test Plan to Test Queue from Test Plan Report Status dialog
After clicking the "Add to Test Queue" button" in the "Test plan report status dialog", the user will be prompted with another dialog with the following characteristics:
Test Queue
Finish Bot run dialog
After clicking the "Finish AT_Name Bot run" button in the Test Queue, the user is prompted with a dialog with the following characteristics:
Hi @mcking65, Isaac has updated the mockups. Please let us know if you have any comments or if we have the green light to move forward with the implementation. Thank you!
@isaacdurazo
That all sounds good. I have one clarifying question.
Here are the updated mockups as well as some new ones. I'm also adding notes here for the changes that were made.
Add Test Plan to Test Queue from Test Plan Report Status dialog
After clicking the "Add to Test Queue" button" in the "Test plan report status dialog", the user will be prompted with another dialog with the following characteristics:
- Heading: Adding TEST_PLAN_NAME Test Plan to the Test Queue
- Body: Would you like to generate a report with automation for this Test Plan automatically?
- Actions: Cancel, Add, and run later and Add and generate a report
Is this 3 actions?
Sorry for the confusion, @mcking65. The three proposed actions are:
1. Cancel
2. Add and run later
3. Add and generate a report
Thank you @isaacdurazo, I understand now.
I'd like to suggest that option 3 be named to provide clarity that the automation bot is doing the generating. What it generates may not be a complete report. Maybe ...
Add and run with bot
Add as bot run
Add bot run
I like your first suggestion, @mcking65 - Add and run with bot
@mcking65 We've learned through the ongoing implementation work that the constraints of the job execution environment present significant hurdles to supporting a "retry" operation on a per-test basis. This, in turn, has changed our thinking on the feature's desirability to end-users. It's difficult to imagine a scenario where a Test Admin would want to retry some incomplete tests but not others (and it's also unlikely that anyone would enjoy requesting retries for one test at a time).
We've been considering some modifications to the workflow that would better match the limitations of the environment and the needs of the user. The first four steps are supported by the design as proposed so far; only the final two steps reflect our new thinking.
Here are the design changes we'd need to make in order to realize that:
- remove the "Cancel" and "Restart" buttons which are associated with each individual test on the Test Plan Run page
- add a button labeled "Retry failed tests" next to the button labeled "Finish BOT_NAME run", which is disabled until the Test Plan Run has been marked as "finished". Pressing this new button removes the "finished" status from the run and creates a new collection job which consists only of the cancelled tests
Does that make sense? Do you think it's acceptable for the MVP?
@jugglinmike wrote:
- The Test Admin notices that the response collector appears to have stalled
- (CHANGE) The Test Admin marks the Test Plan Run as "finished", causing ARIA-AT App to change the status of the incomplete tests to "cancelled"
If the bot job is running and stalled, and the admin doesn't want to wait, this seems like a "Stop Running" function rather than a "finished" function.
- (CHANGE) The Test Admin requests that the response collection system retries the cancelled tests only
Love this idea.
Here are the design changes we'd need to make in order to realize that:
- remove the "Cancel" and "Restart" buttons which are associated with each individual test on the Test Plan Run page
If the cancel button is gone, how does the admin stop the bot? Perhaps just rename the cancel button to "Stop Bot_Name Run".
- add a button labeled "Retry failed tests"
Shouldn't the button be named "Retry Cancelled Tests" instead of "Retry Failed Tests"? Failed means that the AT didn't pass the test.
next to the button labeled "Finish BOT_NAME run" which is disabled until the Test Plan Run has been marked as "Finished".
Why not let the admin be satisfied with the partial bot run? The Admin should be able to "Finish the BOT_NAME run". In this case, any test that was "Cancelled" is now just an incomplete test.
Pressing this new button removes the "finished" status from the run and creates a new collection job which consists only of the cancelled tests
Excellent.
Does that make sense? Do you think it's acceptable for the MVP?
Almost seems like an improvement. I think you are right that it will be the rare case that an admin will want to run the bot for a single test.
@jugglinmike wrote:
- The Test Admin notices that the response collector appears to have stalled
- (CHANGE) The Test Admin marks the Test Plan Run as "finished", causing ARIA-AT App to change the status of the incomplete tests to "cancelled"
If the bot job is running and stalled, and the admin doesn't want to wait, this seems like a "Stop Running" function rather than a "finished" function.
I think this will work, but I have a question regarding if/when we disable these buttons--see below.
- (CHANGE) The Test Admin requests that the response collection system retries the cancelled tests only
Love this idea.
Great!
Here are the design changes we'd need to make in order to realize that:
- remove the "Cancel" and "Restart" buttons which are associated with each individual test on the Test Plan Run page
If the cancel button is gone, how does the admin stop the bot? Perhaps just rename the cancel button to "Stop Bot_Name Run".
Sure, but to be clear: that action will be for the entire Test Plan Run, whereas we designed the button I was referencing for each individual Test. We're still seeking to remove that.
- add a button labeled "Retry failed tests"
Shouldn't the button be named "Retry Cancelled Tests" instead of "Retry Failed Tests"? Failed means that the AT didn't pass the test.
Yes, "cancelled" is the correct term here; sorry to have been imprecise.
I think it would be still better to say "Retry cancelled collections" since that reinforces the idea that the automation doesn't "run tests" (a process which involves verdict assignment) but rather "collects responses."
That said, I'm happy to go with your preference on this.
next to the button labeled "Finish BOT_NAME run" which is disabled until the Test Plan Run has been marked as "Finished".
Why not let the admin be satisfied with the partial bot run? The Admin should be able to "Finish the BOT_NAME run". In this case, any test that was "Cancelled" is now just an incomplete test.
This gets at my question about if/when we disable these buttons.
Your proposal is a much more explicit mapping of user intent to user interface, but the extra button makes for additional states that need accounting.
Pressing this new button removes the "finished" status from the run and creates a new collection job which consists only of the cancelled tests
Excellent.
Does that make sense? Do you think it's acceptable for the MVP?
Almost seems like an improvement. I think you are right that it will be the rare case that an admin will want to run the bot for a single test.
Awesome!
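To illustrate the retry behavior we've converged on, here is a minimal sketch (all names below are hypothetical):

```typescript
// Hypothetical names; a sketch only.
type CollectionStatus = 'queued' | 'running' | 'complete' | 'cancelled' | 'error';

interface CollectionTest { testId: string; status: CollectionStatus }
interface BotRun { finished: boolean; tests: CollectionTest[] }

// Pressing the retry button unsets the run's "finished" status and
// creates a new collection job consisting only of the cancelled tests.
function retryCancelledTests(run: BotRun): { testIds: string[] } {
  run.finished = false;
  return {
    testIds: run.tests
      .filter((t) => t.status === 'cancelled')
      .map((t) => t.testId),
  };
}
```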
@mcking65 It's been a long time since we attempted to describe the design holistically, so (after conferring with Isaac), I'm going to write a spec for the automation UI in ARIA-AT App that reflects all of our conversation to date.
Isaac's most recent design included two buttons pertaining to "finishing" a bot run. The first is rendered directly in the Test Queue with the label "Finish AT_Name Bot run." Activating that opens a dialog which includes a dropdown for re-assignment, a button for deleting the run, and a button labeled "Mark as finished." While the redundancy in labeling was somewhat awkward even back then, I think our latest refinements (which add more buttons) warrant a more generic label. Below, I'm proposing "Manage AT_Name Bot run".
No change from 2023-08-31.
- Behavior: sets the Test Plan Run's testResults' `completedAt` value to the current time (this is how the system tracks the "finished" state)
- Behavior: unsets the Test Plan Run's testResults' `completedAt` value (this is how the system tracks the "finished" state)
- Behavior: opens a `mailto:` URL

Thank you @jugglinmike, it is really helpful to have the full spec consolidated for review.
Test plan report status dialog
No change from 2023-08-31.
Button labeled "Add to Test Queue"
- Location: the cells in the column labeled "Report Status"
- Present when: the row's Test Plans can be run in automation (only NVDA at the time of writing)
This button needs to be present on all rows even if automation is not available. If automation is not available, it just opens the standard confirmation dialog for adding a manual test run. If automation is available, it opens the experience described below that can either add a manual test run or an automation run. I think I may have opened a separate issue related to this need for the button to always be there.
- Enabled when: always enabled
- Behavior: opens a dialog titled "Adding TEST_PLAN_NAME Test Plan to the Test Queue"
- Dialog with heading, "Adding TEST_PLAN_NAME Test Plan to the Test Queue"
- Modal: no
These dialogs will be more accessible if they are modal. I can't think of a reason for not making it modal.
Contents
- text: "Would you like to generate a report with automation for this Test Plan automatically?"
This text gives the impression that there is only one action -- run automatically, but there are two possible actions. Also, the text does not help users confirm they pressed the right button because it does not indicate the browser and AT that have been chosen.
I suggest:
Text: "Choose how the report for AT_NAME and BROWSER_NAME will be generated. Add it to the queue so it can be assigned to a tester at a later time or start running automated response collection with BOT_NAME."
Test Queue
Button labeled: "Manage [AT_Name] Bot run"
- Location: the cells in the column labeled "Report Status"
- Present when: the row's Test Plan includes a bot-assigned Test Plan Run
I think this should be only when it includes a bot run that is either not finished or not re-assigned to a person. See my comments below about not needing an "unfinish" action.
Buttons should only be present in the report status column when the presence of the button communicates status. In other words, when the current status makes a specific forward-progress action relevant, we want a button that enables that action. If the current status doesn't support moving the process forward, then there should not be an action button in the status column.
Prose describing the status of the collection
- Text: "Responses for x of y tests recorded. Verdicts for x of y assertions assigned."
- Location: the column labeled "Report Status"
Previously we had this text in the testers column next to the bot run assignment. This is the status for just the bot, not for the overall report, so it seems like it belongs there.
As an aside, I think that we should update the text in report status column on the test queue page to use the same status strings we are using in the status column of the table in the report status dialog. It is much more informative. It would enable the test admin to scan down just that column and have a better understanding without having to look at all the detail in the testers column. Should I raise a separate issue for this?
- Present when: the row's Test Plan includes a bot-assigned Test Plan Run
I assume that if a bot run is complete and assigned to a person, that prose goes away because that run is now a human run; at that point, there is no longer a bot run.
Dialog with heading "Manage [AT_Name] Bot run"
- Modal: no
Again, it would be helpful if it were modal.
Contents
Select-only combobox labeled "Assign to..."
- Options: the account names of other testers
- Enabled when: always enabled
- Behavior: re-assigns the Test Plan Run
Once reassigned, the bot should no longer be listed as a tester in the testers column and the manage bot run functionality should go away.
* Button labeled "Retry cancelled collections"
"Collections" doesn't feel like the right noun. I know we call it response collector, but the term "collections" doesn't feel like a clear representation of what was cancelled. I think "Retry cancelled jobs" or "retry cancelled collection jobs".
* Button labeled "Mark as not finished" * Present when: the Test Plan Run is in the "finished" state * Enabled when: always enabled * Behavior: unsets the Test Plan Run's testResults' `completedAt` value (this is how the system tracks the "finished" state)
I don't think we need this function. If the run is marked finished, we can treat it just like a finished run that was performed by a human. We can edit it, re-assign it, or delete it. We don't need a special "unfinish" button for bots, and I don't see it being generally useful for runs completed by humans.
If a bot run is marked finished, the "manage bot run" button should disappear, and that run is now just another run; it does not require any special functionality.
* Button labeled "Delete" * Present when: always present * Enabled when: always enabled * Behavior: destroys the Test Plan Run
If we have this function, it should prompt for confirmation. If confirmed, both the confirmation dialog and the manage bot run dialog go away, and the focus should be set someplace useful in that row of the test queue, probably on the first focusable element in the actions column. That said, this is not a truly necessary function since we should be able to delete from the actions column.
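For illustration, that focus hand-off might look something like the following sketch (the selectors are hypothetical, not the App's real markup):

```typescript
// Hypothetical selectors; the App's real markup differs.
// After both dialogs close, move focus to the first focusable element
// in the row's "Actions" column.
function restoreFocusToActions(row: HTMLElement): void {
  const target = row.querySelector<HTMLElement>(
    'td.actions button, td.actions a[href], td.actions [tabindex]'
  );
  target?.focus();
}
```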
Thank you, @mcking65! I have applied most of your feedback to create a new version of the UI description:
You'll note that some of your requests are not reflected above. Following my explanations, please find a set of four questions regarding the uncertainty that remains.
As an aside, I think that we should update the text in report status column on the test queue page to use the same status strings we are using in the status column of the table in the report status dialog. It is much more informative. It would enable the test admin to scan down just that column and have a better understanding without having to look at all the detail in the testers column. Should I raise a separate issue for this?
Yes, please!
- Present when: the row's Test Plan includes a bot-assigned Test Plan Run
I assume that if a bot run is complete and assigned to a person, those prose go away b/c that run is now a human run; at that point, there is no longer a bot run.
That's right--the phrase "bot-assigned Test Plan Run" is just a more verbose way of saying of "bot run."
Once reassigned, the bot should no longer be listed as a tester in the testers column and the manage bot run functionality should go away.
Bear in mind that Testers have the ability to "un-assign" themselves. In the current system, this action causes the Test Plan Run to be lost. Would you be comfortable allowing Testers to discard bot-collected AT responses like this? One alternative (which @stalgiag has already explored in her work-in-progress) would be to assign the Test Plan Run back to the bot. This behavior would tend to support an ability for Admins to explicitly re-assign back to the bot.
I don't think we need this function. If the run is marked finished, we can treat it just like a finished run that was performed by a human. We can edit it, re-assign it, or delete it. We don't need a special "unfinish" button for bots, and I don't see it being generally useful for runs completed by humans.
Even in the absence of an intentional workflow, this button has value in that it reduces the impact of accidents. It's a more robust solution than asking the user to confirm the "mark as finished" action (which I will recommend if we do not include the button).
If a bot run is marked finished, the "manage bot run" button should disappear, and that run is now just another run; it does not require any special functionality.
When a bot-assigned Test Plan Run is marked as "finished," it still requires the "Assign to..." button available in this dialog. More generally, given these two means of distinguishing Test Plan Runs:
...I think the former is a more intuitive delineation both for the system internals and the end-user.
If we have this function, it should prompt for confirmation. If confirmed, both the confirmation dialog and the manage bot run dialog go away, and the focus should be set someplace useful in that row of the test queue, probably on the first focusable element in the actions column. That said, this is not a truly necessary function since we should be able to delete from the actions column.
Great! That was my feeling, as well.
Dialog with heading "Manage [AT_Name] Bot run" Select-only combobox labeled "Assign to..." Options: the account names of other testers
I suggest that be "Options: the account names of other testers who are not already assigned to another run of the same test plan with the same AT and browser"
When a bot-assigned Test Plan Run is marked as "finished," it still requires the "Assign to..." button available in this dialog.
I am not sure why that would be. If it is finished, what does a human need to do?
If a bot does all the work, will we show the bot as the tester in the run history? I think we should. It seems strange that we would have the bot do all the work and then assign it to a person who doesn't do anything.
Sometimes an admin needs to edit a "finished" run. We do that by choosing "run as another tester". Should we allow the admin to pose as the bot? Would it be a problem if the run history showed that work was done by the bot if that work was revised by a human?
If we want bot runs to be attributed to a human instead of the bot, then one option is to auto-assign the finished run to the account of the admin who marked it finished.
I have a minor concern that we are building a system that may not be accurately tracking the provenance of work. I don't know that it is a real problem; just raising it for consideration.
Remaining questions
When a Tester un-assigns themselves from a Test Plan Run which was previously assigned to a bot, should the system:
- Discard the Test Plan Run (i.e. the current behavior for fully-human-owned Test Plan Runs)
- Re-assign the Test Plan Run to the bot
- Do something else
I didn't realize a tester could "delete" their work by "unassigning" themselves. I should test that out. The function should be named "delete", not "unassign".
If a bot run is assigned to a person, and that person deletes it, yes, it should be deleted. Just because some of the data was generated by a bot, it is not more or less valuable or meaningfully different from work done by a person. In fact, it is much easier to regenerate. So, I have no concerns with the current "unassign" function working the same way all the time regardless of the provenance of the data. Make the system behavior consistent; do not build special conditions into the code because of how some of the data was generated.
- Should it be possible for Test Admins to re-assign Test Plan Runs back to a bot?
I don't see value in that. Sounds complicated and unnecessary. Just do another run with the bot.
When a Test Admin marks a bot-assigned Test Plan Run as "finished", would you prefer:
- They are prompted with a confirmation dialog
- They are presented with a button labeled "Mark as not finished"
- Both of the above
- None of the above
Option 1.
- Do you agree that the button labeled "Manage AT_Name Bot run" must be available even after the bot-assigned Test Plan Run has been marked as "finished"?
No; I don't think there is a need for any of that functionality. I think the button should be removed from the report status column and not appear elsewhere.
In general, it seems like there might be some value in having an admin re-assign function for runs. That is, if the user is an admin, the names of assigned testers in the testers column would be buttons that open the list of people who could be assigned to that run, i.e., all the testers who are not already assigned to another run of the same test plan with the same AT and browser. I don't think this is a necessary function that has to be part of this work though. It could be a follow-on optimization that gives admins more flexibility when work is incomplete and blocking progress. Today, the only real option is delete.
Thanks, @mcking65! If you're comfortable discussing the provenance issue elsewhere, then I think we have reached a final version.
Dialog with heading "Manage [AT_Name] Bot run" Select-only combobox labeled "Assign to..." Options: the account names of other testers
I suggest that be "Options: the account names of other testers who are not already assigned to another run of the same test plan with the same AT and browser"
Understood
When a bot-assigned Test Plan Run is marked as "finished," it still requires the "Assign to..." button available in this dialog.
I am not sure why that would be. If it is finished, what does a human need to do?
Nothing further; I was mistaken (sorry for the confusion; I misinterpreted the meaning of the term "finished" in this context).
If a bot does all the work, will we show the bot as the tester in the run history? I think we should. It seems strange that we would have the bot do all the work and then assign it to a person who doesn't do anything.
Sometimes an admin needs to edit a "finished" run. We do that by choosing "run as another tester". Should we allow the admin to pose as the bot? Would it be a problem if the run history showed that work was done by the bot if that work was revised by a human?
If we want bot runs to be attributed to a human instead of the bot, then one option is to auto-assign the finished run to the account of the admin who marked it finished.
I have a minor concern that we are building a system that may not be accurately tracking the provenance of work. I don't know that it is a real problem; just raising it for consideration.
My personal opinions about transparency notwithstanding, it's been my understanding (shared in July) that we sidestepped this issue with the social contract that the human who endorses a bot-generated Test Run does so completely--for all the AT responses and all the verdicts.
If we're not comfortable with this contract, then the conversation may exceed the scope of this UI design since it raises questions like:
Remaining questions
When a Tester un-assigns themselves from a Test Plan Run which was previously assigned to a bot, should the system:
- Discard the Test Plan Run (i.e. the current behavior for fully-human-owned Test Plan Runs)
- Re-assign the Test Plan Run to the bot
- Do something else
I didn't realize a tester could "delete" their work by "unassigning" themselves. I should test that out. The function should be named "delete", not "unassign".
Understood. Here's an issue to track that.
If a bot run is assigned to a person, and that person deletes it, yes, it should be deleted. Just because some of the data was generated by a bot, it is not more or less valuable or meaningfully different from work done by a person. In fact, it is much easier to regenerate. So, I have no concerns with the current "unassign" function working the same way all the time regardless of the provenance of the data. Make the system behavior consistent; do not build special conditions into the code because of how some of the data was generated.
Understood
- Should it be possible for Test Admins to re-assign Test Plan Runs back to a bot?
I don't see value in that. Sounds complicated and unnecessary. Just do another run with the bot.
Understood
When a Test Admin marks a bot-assigned Test Plan Run as "finished", would you prefer:
- They are prompted with a confirmation dialog
- They are presented with a button labeled "Mark as not finished"
- Both of the above
- None of the above
Option 1.
Understood
- Do you agree that the button labeled "Manage AT_Name Bot run" must be available even after the bot-assigned Test Plan Run has been marked as "finished"?
No; I don't think there is a need for any of that functionality. I think the button should be removed from the report status column and not appear elsewhere.
In general, it seems like there might be some value in having an admin re-assign function for runs. That is, if the user is an admin, the names of assigned testers in the testers column would be buttons that open the list of people who could be assigned to that run, i.e., all the testers who are not already assigned to another run of the same test plan with the same AT and browser. I don't think this is a necessary function that has to be part of this work though. It could be a follow-on optimization that gives admins more flexibility when work is incomplete and blocking progress. Today, the only real option is delete.
Understood
I think this is ready to go forward. In the meantime, there are a couple of in-the-weeds details that may need clarification.
Test Queue
Dialog with heading "Manage [AT_Name] Bot run"
Button labeled "Retry cancelled collection jobs"
- Present when: always present
- Enabled when: all collection jobs are "complete" or "cancelled"
If all jobs are complete, it does not seem like this button should be enabled. Shouldn't it only be enabled when "at least one collection job is cancelled"?
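For illustration, the amended enablement condition might combine both checks, something like this sketch (hypothetical names):

```typescript
type JobStatus = 'queued' | 'running' | 'complete' | 'cancelled';

// Enabled only when no job is still queued or running AND there is at
// least one cancelled job to retry.
function canRetryCancelled(jobs: JobStatus[]): boolean {
  const settled = jobs.every((s) => s === 'complete' || s === 'cancelled');
  return settled && jobs.some((s) => s === 'cancelled');
}
```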
* Button labeled "Mark as finished" * Present when: always present * Enabled when: all collection jobs are "complete" and all assertions have a verdict assigned * Behavior: opens a dialog titled "Are you sure you wish to mark [AT_Name] Bot run as finished?"
Dialog with heading: "Are you sure you wish to mark [AT_Name] Bot run as finished?"
Modal: yes
Contents
Button labeled "Yes"
- Present when: always present
- Enabled when: always enabled
- Behavior: sets the Test Plan Run's testResults' `completedAt` value to the current time (this is how the system tracks the "finished" state), closes the current dialog, and closes the dialog titled "Manage [AT_Name] Bot run"
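For illustration, a minimal sketch of how that `completedAt` stamping might work, assuming hypothetical types:

```typescript
// Hypothetical shape; a sketch only.
interface TestResult { completedAt: Date | null }

// "Finished" is tracked by stamping completedAt on each test result;
// unsetting it returns the run to the unfinished state.
function setRunFinished(testResults: TestResult[], finished: boolean): void {
  const stamp = finished ? new Date() : null;
  for (const result of testResults) result.completedAt = stamp;
}
```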
Given that we don't want the "Manage [AT_Name] Bot run" to appear after a bot run is finished, I see a potential problem. Here are two options. I think option B is better.
Option A: Perform one more action when the "Yes" button is activated: assign the collection job to the current user, i.e., the user who activated the "Yes" button. Otherwise, there is still a run assigned to the bot in the testers column. With this spec as written, if there is a bot assigned to a run in the testers column, then the report status column will still contain the "Manage [AT_Name] Bot run" button.
Option B: Change the condition for presence of the "Manage [AT_Name] Bot run" button to "the row's Test Plan includes an unfinished bot-assigned Test Plan Run".
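As a sketch, Option B's presence condition might be expressed like this (hypothetical names):

```typescript
// Hypothetical shape; a sketch of Option B's condition.
interface RunInfo { assignedToBot: boolean; finished: boolean }

// Show "Manage [AT_Name] Bot run" only while the row's Test Plan has an
// unfinished bot-assigned Test Plan Run.
function showManageBotRunButton(runs: RunInfo[]): boolean {
  return runs.some((run) => run.assignedToBot && !run.finished);
}
```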
These mockups describe the minimum UI and UX features for the ARIA AT app to support AT Automation.
Test Queue
Visual Mockup
Text-based Mockup
From this page, a Test Admin is able to start a Response Collection Job. The Response Collector Bot should first be assigned to a Test Plan the same way we currently assign human testers. Once the Bot appears under the Testers column, the Actions column will display the button "Start Response Collection", which when triggered will start the Collection Job. Once the job gets started, this button will turn into a "Pause" button. When the job is done, a "Create run and copy results for ..." dropdown button gets displayed, which allows the admin to assign a human tester to provide verdicts.
This design also suggests displaying different statuses under the Report Status column, such as X Cancelled, X Errors, X Skipped, X Complete, and X Queued (not necessarily in that order).
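For illustration, a sketch of how those per-status counts might be summarized for the column (hypothetical names):

```typescript
type TestStatus =
  | 'queued' | 'running' | 'complete' | 'cancelled' | 'error' | 'skipped';

// Produce e.g. "4 Complete, 1 Cancelled, 2 Queued" for the column.
function summarizeStatuses(statuses: TestStatus[]): string {
  const counts = new Map<TestStatus, number>();
  for (const s of statuses) counts.set(s, (counts.get(s) ?? 0) + 1);
  return [...counts.entries()]
    .map(([status, n]) => `${n} ${status[0].toUpperCase()}${status.slice(1)}`)
    .join(', ');
}
```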
Test Plan Run page
Visual Mockup
Text-based Mockup
This design re-uses the existing structure and layout of the Test Run Page. On the left-hand side, we repurposed the timeline to display different statuses related to the response collection job. The statuses and their icons are:
On top of the main Test Area, we are repurposing the toast that currently reads "Reviewing tests of USERNAME" to indicate that one is reviewing the Responses of the Collection Bot. This design suggests: "Reviewing tests of Response Collector Bot"
The main Test area is read-only and it will display the Bot responses collected during the job in the text area currently dedicated to the AT output
The assertions section is where the tester will provide the verdicts, but they will not be able to modify these until they are assigned.
Lastly, on the right-hand side, we have repurposed the Test Options buttons to have the following actions: