department-of-veterans-affairs / va.gov-cms

Editor-centered management for Veteran-centered content.
https://prod.cms.va.gov
GNU General Public License v2.0

PACT Wizard: Validate prototyping tools / capability for our branching requirements #13722

Closed wesrowe closed 1 year ago

wesrowe commented 1 year ago

BLOCKED on direction of survey tool vs coded prototype (ticket not needed if coding)

Description

User story

AS A Researcher I WANT to attempt to build a proof-of-concept (PoC) version of the PACT Wizard SO THAT we will know for sure whether Optimal Workshop can provide the capabilities we need for a research prototype.

AS A Veteran I WANT to understand the complexity of the question SO THAT I can complete the wizard successfully

PACT Wiz questions mural

Key requirements (at first glance):

Optional tools / eval notes

Effort level and requirements are the differentiators.

Acceptance criteria

jilladams commented 1 year ago

Noting: Qualtrics is very expensive and I don't think we'll be able to purchase access in our T&M for just this use case. We know the Long Covid team (working with Marian Adly, marian.adly@gsa.gov / marian.adly@va.gov) is using Qualtrics, and we've reached out about whether it's set up in such a way that we could be added.

cindymerrill commented 1 year ago

Posted in DSVA slack channel #research-ops: https://dsva.slack.com/archives/C0216PL32HJ/p1685126827073229. I should hear back from Shane Strassberg (out today) or Robyn Singleton next week.

I also posted on an Ad Hoc channel, and the response so far from everyone was to use Qualtrics as I originally suggested. I just found a free survey maker on Qualtrics’ website that seems to have decent capabilities, but we won’t know for sure until we try it out.

Other points from the Ad Hoc channel:

wesrowe commented 1 year ago

Cindy and I wrote #13965 as a precursor, as we believe a high-level understanding of the research approach and key questions for research will drive requirements for the prototyping tool.

cindymerrill commented 1 year ago

If we can't get a proper Qualtrics license, I'd like to push for using the free version of Qualtrics without saving any Veteran data--either by having the participant not submit their responses at the end, OR by just viewing the survey in Preview mode (assuming the logical branches work correctly). @wesrowe

jilladams commented 1 year ago

Qualtrics thread for the histories: https://dsva.slack.com/archives/C52CL1PKQ/p1685548803568559

cindymerrill commented 1 year ago

See my recommendations on prototyping and recruiting in this ticket

wesrowe commented 1 year ago

Update from Danielle T re content development as of 6/26 (via Slack DM to Wes and Cindy):

  1. The v1.2 Mural is still current.
  2. The content has not gone to stakeholders yet, pending resolution of questions below.

Updates from her meeting today:

Open questions that I'm still wrestling with (you'll see notes on these in the MURAL):

wesrowe commented 1 year ago

Draft inputs for determining the prototype approach between Optimal Workshop and CodePen:

Calculations (for CP vs Staging vs OW):

jilladams commented 1 year ago

Notes from the meeting with Liz Lantz:

CodePen

Pricing

CodePen has free and paid accounts.

Free includes standard markup in the header that will be visible during usability tests.

Paid accounts = https://codepen.io/accounts/signup

If we opt for CodePen, we'll need to work out whether our team should sign up for its own instance and bill back to VA, which requires PO / COR sign-off, or whether we can just add seats to the Check In account (trickier for managing invoicing, probably not preferred).

Noting: CodePen is not listed in the FedRAMP marketplace, but it has been used by both Liz's team and the Check In Experience team for usability testing, so we are not concerned about that currently.

Tech notes

cindymerrill commented 1 year ago

@wesrowe Danielle has removed the "served outside the US" question, so now I believe the only branching is based on the single-select question about service years. Therefore, it seems there are NO mathematical comparisons of years after the first question. That is, based on the response to the very first question, there is a fixed flow of questions for each of the 3 options (though some sequences of questions end with the first "yes" response). If so, it would be very easy to build that branching logic in Optimal Workshop without asking research participants any extra questions (other than the few that result from removing the "served outside the US" question).

If the above is true, then I think we should prototype in Optimal Workshop and save CodePen for prototyping another product whose user flow doesn't look exactly like a survey.

UPDATE: Danielle confirmed this. "Participant chooses 1 of these 3 options and then there are 3 corresponding pathways depending on the options."
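To make the branching concrete, here's a minimal TypeScript sketch of the confirmed flow. This is our own illustration, not anything from the team's codebase; the era names and question IDs are placeholders, not the real content.

```ts
// Hypothetical sketch: one single-select question chooses one of three
// fixed pathways; no date math happens after the first answer.
// All IDs below are placeholders.
type EraChoice = 'era1' | 'era2' | 'both';

const pathways: Record<EraChoice, string[]> = {
  era1: ['q-era1-1', 'q-era1-2', 'q-era1-3'],
  era2: ['q-era2-1', 'q-era2-2'],
  both: ['q-era1-1', 'q-era1-2', 'q-era1-3', 'q-era2-1', 'q-era2-2'],
};

// Once the first question is answered, the rest of the flow is fixed;
// some sequences simply end early at the first "yes" response.
function questionsFor(choice: EraChoice): string[] {
  return pathways[choice];
}

console.log(questionsFor('era2')); // ["q-era2-1", "q-era2-2"]
```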

Note that if we use Optimal Workshop’s survey tool as a prototype, the results screens will need to be either plain text in Optimal Workshop, or possibly a web page that would open at the end of the survey (which would need to be mocked up). I don’t remember whether Optimal Workshop’s survey tool lets you open a specified URL at the end of the survey, or whether I’d need to open such a page separately. I'll need to find out.

UPDATE: I just played around with Optimal Workshop’s survey tool, and upon submitting the survey, ALL participants see the same “thank you” message (which we can write) or URL (which we can specify to go to without showing the “thank you” message). This means that we can’t show different results screens depending on question flow (by date range) or their answers to the questions. So, if we want to get user feedback on multiple results screens, we’ll have to mock them up separately for me to show the correct one when they’re done answering all the questions that pertain to them.

cindymerrill commented 1 year ago

One issue with implementing the questions in Optimal Workshop: for Veterans who served in both time periods, we can't branch to one set of questions and then to another set, so we may have to duplicate all the questions from both time periods to make the third branch.

UPDATE: Wes recommended working on each of the question sets and finalizing them before copying both sets of questions into the third branch.
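Using the same placeholder naming as the sketch above, Wes's suggestion amounts to finalizing the two era-specific sets first and then building the third branch by copying both:

```ts
// Placeholder IDs. Since OW can't chain one branch into another, the
// "both eras" pathway just repeats both finalized question sets verbatim.
const era1Questions = ['q-era1-1', 'q-era1-2', 'q-era1-3'];
const era2Questions = ['q-era2-1', 'q-era2-2'];
const bothErasQuestions = [...era1Questions, ...era2Questions];
```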

cindymerrill commented 1 year ago

For the results screens, we could take everyone to the same URL that shows generic-looking links (e.g., A, B, or C) to the different results screens, and then I (the moderator) would ask the participant to click on the link that corresponds to the appropriate results screen based on their responses to the questions. Alternately, we might be able to mock up a results screen as a "question" that would show the results screen, but this would only work if the result depends on only the previous question, which is unrealistic.

wesrowe commented 1 year ago

Randi, Cindy, and Wes met on 6/27 to go over what we learned about CodePen and SWAG effort estimates for front-end developer contributions to both prototyping and the actual app. Analysis below.

Conclusion (tldr)

  1. We would have an accessible prototype significantly faster and cheaper by using an existing, a11y-engineered survey tool instead of building our own bespoke one. Creating the prototype is much quicker, and review cycles would be easier/shorter.
  2. If we code a prototype, we should do it on VAgov Staging, not CodePen, as it would essentially be the final app code.
    a. We weren't able to see any significant advantages to CodePen.
    b. There is a risk that CodePen can't import the VA Design System at all (which would add to both development and reviewer burden over using Staging).
    c. Re-use of any prototype code for the final app also depends on whether the VADS could be imported, and Randi anticipates code re-use would save only 1 sprint even in that hopeful scenario.

Full analysis

Notes:

Coded prototype estimates (very high level)

We can code a prototype in either VAgov Staging or CodePen; CodePen is essentially just a hosted front-end environment. Using the VA Design System components in either environment will make accessibility much easier to achieve (a rough sketch of what that looks like follows).
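As an illustration, here's a minimal sketch using the VADS web components va-radio and va-radio-option (real VADS elements, though the question wording and attribute values are our placeholders). It assumes the Design System component bundle is already loaded on the page, which vets-website handles for us and which is exactly the import risk on CodePen:

```ts
// Sketch only: builds an accessible radio group from VADS web components.
// Assumes the VA Design System component bundle is already loaded.
const radio = document.createElement('va-radio');
radio.setAttribute('label', 'When did you serve?'); // placeholder wording

const options: Array<[string, string]> = [
  ['First time period', 'era1'],
  ['Second time period', 'era2'],
  ['Both time periods', 'both'],
];

for (const [label, value] of options) {
  const option = document.createElement('va-radio-option');
  option.setAttribute('label', label);
  option.setAttribute('name', 'service-era');
  option.setAttribute('value', value);
  radio.appendChild(option);
}

document.body.appendChild(radio);
```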

Coded prototype estimates

Effort to code prototype:

Review cycles for coded prototype: ~1-2 sprints (including code iterations)

Pros/Cons of CodePen vs. VAgov Staging

Pros of CodePen:

Cons of CodePen:

Survey Tool prototype in Optimal Workshop

With Danielle's simplification of the questions and flow, we are confident that OW can be made to work with very little need for workarounds. Cindy wrote up findings from her quick PoC in a comment above this one. She would most likely want to build the results screen(s) outside OW, but she is confident she could do that easily and without impacting the quality of the research results.

Optimal Workshop estimates

Effort to build prototype:

Review cycles for OW prototype (2-4 people): 1-1.5 sprints (including prototype updates)

Pros / Cons for OW vs. coded prototype

Pros:

Cons:

Recommendation

1. Optimal Workshop

  1. Build time: can be built by a non-engineer
    • Cindy's effort is roughly a wash between OW and CodePen/Staging: extra review on a custom code solution cancels out not having to build the OW survey; she can build the prototype in OW about as quickly as she could review a coded prototype
    • Engineering time saved: 2-3 engineer sprints
    • Prototyping the results screens = small unknown; possibly a good case for CodePen if another tool (e.g., GitHub) doesn't work
  2. A11y: OW has built-in a11y handling. Versus a coded prototype:
    • Staging has built-in a11y, with more overall dev effort.
    • CodePen: if we cannot incorporate Design System components, it will require custom HTML and a11y testing for each component.
  3. Review cycles: shorter/simpler overall vs. coded prototype, including tighter iteration between prototype and discussion guide.

2. Coded Prototype: VA Staging

If we code a prototype, we recommend going straight to VAgov Staging. The build timeline is similar to CodePen's, but overall effort is saved, as we would essentially be building the final app code.

  1. Build time: CodePen and Staging builds are comparable, except for a11y unknowns.
  2. A11y: Built into vets-website / the Design System.
  3. Review cycles: CodePen / Staging are comparable.
  4. Re-use of any prototype code for the final app: N/A, it is the final app.

3. CodePen

  1. Build time: CodePen and Staging builds are comparable, with more a11y unknowns; more dev effort than OW.
  2. A11y: Risk that CodePen can't import the VA Design System, which would add to both development and reviewer burden over using Staging and require custom HTML / a11y testing.
  3. Review cycles: CodePen / Staging are comparable.
  4. Re-use of any prototype code for the final app: Depends on whether the DS could be imported to CodePen. If so, Randi anticipates code re-use to save ~1 sprint.

Future: CodePen offers some advantages over the VAgov platform in some scenarios, as described by Liz Lantz in our meeting (rapid build time, non-devs can access code, a possible personalization use case leveraging those advantages). But the risks around importing Design System components and a11y build/testing, together with Optimal Workshop's built-in accessibility and off-the-shelf survey functionality, lead us to prefer OW.

wesrowe commented 1 year ago

@davidconlon, please see the comment above for the team's recommendation on PACT Act prototyping.

davidconlon commented 1 year ago

OK. This seems like another analysis. Is the decision that we go with Optimal Workshop?

wesrowe commented 1 year ago

Yes, OW will save us a couple months potentially. Closing.

cindymerrill commented 1 year ago

Danielle has updated the question logic to include more branching based on multi-select (checkbox) questions, which Optimal Workshop doesn't support. We could ask research participants to check only one option instead of multiple, but I'd prefer to use a more powerful survey tool to make a more realistic prototype for research. Survey Monkey, which Agile 6 already has licenses for, supports branching based on multi-select questions and also lets you create rules for deciding when/where to jump. Jo Agnitti is currently using Survey Monkey and confirmed this. I've asked her more questions about the survey capabilities in this slack thread.

Today I requested a Survey Monkey license from Tony Arashiro in this slack thread so I could try it out, but he said "You would need to go through your delivery manager. They would need to determine if it is in the materials budget, provide justification, and provide the number of licenses along with total cost. Once that is approved I can procure the license for the team and provide access...we need to have justification from our team and the COR needs to approve it along with the other items mentioned." So I'd like to talk with @jilladams about this ASAP because I may have time to build the prototype this coming week while my VBA research is still waiting for content/design. FYI @wesrowe.

cindymerrill commented 1 year ago

Today @wesrowe and I met with Joann Ignitti to learn more about Survey Monkey (SM). Below are my notes from the meeting.

With SM you CAN:

With SM you CAN'T:

cindymerrill commented 1 year ago

@jilladams @wesrowe I'd like to proceed with a request for a SM license for the following reasons:

What we need for the prototype that SM supports but Optimal Workshop doesn't (a rough sketch of items 1 and 4 follows the list)

  1. branch based on checkbox responses (need for at least 3 questions)
  2. branch based on responses to multiple questions
  3. create pages with text but not a question (use to show different results pages)
  4. "none of the above" option for checkbox questions prevents any other options from being selected
  5. duplicate questions within the survey (saves time)
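A rough sketch of items 1 and 4 in plain TypeScript, as our own illustration rather than Survey Monkey's actual rule format (all page and option names are placeholders):

```ts
// Item 4: an exclusive "none of the above" clears any other checked boxes.
const NONE = 'none-of-the-above';

function applyExclusiveNone(selected: string[]): string[] {
  return selected.includes(NONE) ? [NONE] : selected;
}

// Item 1: branch based on which checkboxes ended up checked.
function nextPage(selected: string[]): string {
  const answers = applyExclusiveNone(selected);
  if (answers.includes(NONE)) return 'page-no-followups';
  if (answers.includes('option-a')) return 'page-option-a-followups';
  return 'page-generic-followups';
}

console.log(nextPage(['option-a', NONE])); // "page-no-followups"
```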

cindymerrill commented 1 year ago

I'm going to research advanced branching in Survey Monkey, which isn't supported by Joann's license, to make sure it will do the second item above: branch based on responses to multiple questions.

UPDATE: I believe this is the case. You can define rules of the form "If Q1 is X and Q2 is Y, then branch to page Z."
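That rule form is easy to picture; here's a small TypeScript model of it (our own sketch, not Survey Monkey's configuration format; question IDs and page names are placeholders):

```ts
type Answers = Record<string, string>;

// One rule: if every listed question matches its value, jump to the page.
interface BranchRule {
  when: Answers;
  goToPage: string;
}

const rules: BranchRule[] = [
  { when: { Q1: 'X', Q2: 'Y' }, goToPage: 'Z' }, // "If Q1 is X and Q2 is Y..."
];

function resolvePage(answers: Answers, fallbackPage: string): string {
  const hit = rules.find((rule) =>
    Object.entries(rule.when).every(([q, v]) => answers[q] === v),
  );
  return hit ? hit.goToPage : fallbackPage;
}

console.log(resolvePage({ Q1: 'X', Q2: 'Y' }, 'next-page')); // "Z"
```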

cindymerrill commented 1 year ago

@jilladams @wesrowe Here's what I learned about Survey Monkey pricing from https://www.surveymonkey.com/pricing/individual/?ut_source=pricing-teams-summary... I need a license with Advanced survey logic. Options:

And I'd need the license for probably only 2 months:

What else do we need to request this, @jilladams ?

jilladams commented 1 year ago

Noted. I'll write it up and send to PO / COR now.

cindymerrill commented 1 year ago

Thank you! Let me know if you have more questions...

jilladams commented 1 year ago

Request emailed to @davidconlon and Keith Riley, subject line "Sitewide T&M Material budget: Survey Monkey license, 2 months", 5pm 7/19.

jilladams commented 1 year ago

Survey Monkey approved for a single license, for 2 months. If we need beyond 2 months from date of signup, we will need to revisit & re-request approval for an extension.

I'm working with A6 IT on signing up for the account in order to provide Cindy with a login. Closing this ticket.

jilladams commented 1 year ago

Noting: Survey Monkey bills annually for this tier / type of license, so we can't get it for 2 months. Discussion open with COR re: whether we can get approved for annual or need to fall back to Optimal Workshop. For assessing the people hours required to use Optimal Workshop instead, @cindymerrill provided a doc https://dvagov-my.sharepoint.com/:w:/g/personal/cynthia_merrill_va_gov/EcVTOmN-ODhCp3Aeq58pxqoBESad8qopm7WFPknHk_Pbmw?e=pV2tHg which we can use to flesh out the remaining answers to that question.

@thejordanwood we'll need a design assessment here re: lift for #14444 (or a call from @cindymerrill on whether that can definitely be done in OW vs. a mockup), and for #14446. Points / loosely as days is fine; happy to talk through it, or @cindymerrill / @wesrowe can help support.

cindymerrill commented 1 year ago

@thejordanwood The part of my document linked above that's relevant to you is "Someone will need to mock up (low-fi wireframes are fine) the results screens and a web page of links to them all," along with a time estimate for the TBD in "+ TBD days for designer to design multiple results screens and a page with links to them all". For context, see the 2 sections in the document with headers in RED text.

@jilladams It's #14444 that we'll definitely need Jordan's help on if we use Optimal Workshop. #14446 I might be able to do as the welcome screen in the survey tool.

cindymerrill commented 1 year ago

@thejordanwood might have a hard time estimating her time for mocking up results screens (and home screen) if she can't see the content that goes into them. I just pinged Danielle to request access for Jordan to the Word doc on SharePoint with all the content. Alternately, I could show Jordan in a meeting via screenshare.

UPDATE: SharePoint link from Danielle that should work for Jordan: https://dvagov-my.sharepoint.com/:w:/g/personal/danielle_thierry_va_gov/ESmD5P5nmjhPrE7BmMnMpGMBORL7c_BFQfORmHrpIyFBKw?e=iaEcFG

cindymerrill commented 1 year ago

This document is very complicated, so it might be helpful for me to explain it to @thejordanwood. Or you could start with my document where I documented the impacts, because its first section gives an overview of the high-level logic flow, which includes the results.