Closed wesrowe closed 1 year ago
Noting: Qualtrics is very expensive and I don't think we'll be able to purchase access in our T&M for just this use case. We know the Long Covid team (working with Marian Adly, marian.adly@gsa.gov / marian.adly@va.gov ) is using Qualtrics, and we've reached out about whether it's set up in such a way that we could be added.
Posted in DSVA slack channel #research-ops: https://dsva.slack.com/archives/C0216PL32HJ/p1685126827073229. I should hear back from Shane Strassberg (out today) or Robyn Singleton next week.
I also posted on an Ad Hoc channel, and the response so far from everyone was to use Qualtrics as I originally suggested. I just found a free survey maker on Qualtrics’ website that seems to have decent capabilities, but we won’t know for sure until we try it out.
Other points from the Ad Hoc channel:
Cindy and I wrote #13965 as a precursor, as we believe a high-level understanding of the research approach and key questions for research will drive requirements for the prototyping tool.
If we can't get a proper Qualtrics license, I'd like to push for using the free version of Qualtrics without saving any Veteran data--either by having the participant not submit their responses at the end, OR by viewing the survey in Preview mode (assuming the branching logic works correctly). @wesrowe
Qualtrics thread for the histories: https://dsva.slack.com/archives/C52CL1PKQ/p1685548803568559
See my recommendations on prototyping and recruiting in this ticket
Update from Danielle T re the development of the content as of 6/26: (via Slack DM to Wes, Cindy)
Updates from her meeting today:
Open questions that I'm still wrestling with (you'll see notes on these in the MURAL):
Draft inputs for determining prototype approach between Optimal Workshop and Codepen:
Calculations (for CP vs Staging vs OW):
Notes from the meeting with Liz Lantz:
CodePen has free and paid accounts.
The free tier includes standard CodePen markup in the header that will be visible during usability tests.
Paid accounts = https://codepen.io/accounts/signup
If we opt for CodePen, we'll need to work out whether our team should sign up for its own instance and bill back to VA (which requires PO/COR sign-off), or whether we can just add seats to the Check In account (trickier for managing invoicing, so probably not preferred).
Noting: CodePen is not listed in the FedRAMP marketplace, but it has been used by both Liz's team and the Check In Experience team for usability testing, so we are not concerned about that currently.
@wesrowe Danielle has removed the "served outside the US" question, so now I believe the only branching is based on the single-select question about service years. Therefore, there are NO mathematical comparisons of years after the first question: based on their response to the very first question, there is a fixed flow of questions for each of the 3 options (though some question sequences end with the first "yes" response). If so, it would be very easy to build that branching logic in Optimal Workshop without asking research participants any extra questions (other than the few that result from removing the "served outside the US" question).
If the above is true, then I think we should prototype in Optimal Workshop and save CodePen for prototyping another product whose user flow doesn't look exactly like a survey.
UPDATE: Danielle confirmed this. "Participant chooses 1 of these 3 options and then there are 3 corresponding pathways depending on the options."
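Since the confirmed logic is one fixed pathway per answer to the first single-select question, the whole flow can be modeled as a simple lookup. A minimal sketch of that model (the question IDs and option labels here are hypothetical, not from the actual survey content):

```python
# Hypothetical sketch of the confirmed branching model: the answer to the
# first single-select question fully determines which fixed sequence of
# questions a participant sees. All IDs and labels are illustrative only.
PATHWAYS = {
    "served_1962_1975": ["Q2a", "Q3a", "Q4a"],
    "served_1990_present": ["Q2b", "Q3b"],
    # Third branch repeats both question sets, per the confirmed flow.
    "served_both_periods": ["Q2a", "Q3a", "Q4a", "Q2b", "Q3b"],
}

def questions_for(first_answer: str) -> list[str]:
    """Return the fixed question sequence for a first-question response."""
    return PATHWAYS[first_answer]
```

Because there are no year calculations after the first question, this lookup is the entire branching logic, which is why a survey tool like Optimal Workshop can handle it.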
Note that if we use Optimal Workshop’s survey tool as a prototype, the results screens will need to be either plain text in Optimal Workshop, or possibly a web page that would open at the end of the survey (which would need to be mocked up). I don’t remember whether Optimal Workshop’s survey tool lets you open a specified URL at the end of the survey, or whether I’d need to open such a page separately. I'll need to find out.
UPDATE: I just played around with Optimal Workshop’s survey tool, and upon submitting the survey, ALL participants see the same “thank you” message (which we can write) or URL (which we can specify to go to without showing the “thank you” message). This means that we can’t show different results screens depending on question flow (by date range) or their answers to the questions. So, if we want to get user feedback on multiple results screens, we’ll have to mock them up separately for me to show the correct one when they’re done answering all the questions that pertain to them.
One issue with implementing the questions in Optimal Workshop: for Veterans who served in both time periods, we can't branch to one set of questions and then to another set, so we may have to duplicate all the questions from both time periods to make the third branch.
UPDATE: Wes recommended finalizing each of the two question sets first, then copying both sets of questions into the third branch.
For the results screens, we could take everyone to the same URL that shows generic-looking links (e.g., A, B, or C) to the different results screens, and then I (the moderator) would ask the participant to click on the link that corresponds to the appropriate results screen based on their responses to the questions. Alternatively, we might be able to mock up a results screen as a "question," but that would only work if the result depended only on the previous question, which is unrealistic.
Randi, Cindy and Wes met on 6/27 to go over what we learned about Codepen and SWAG effort estimates for front-end developer contributions to both prototyping and the actual app. Analysis below.
Notes:
We can code a prototype in either VAgov Staging or CodePen; CodePen is essentially a hosted coding environment. Using VA Design System components in either environment will make accessibility much easier to achieve.
Effort to code prototype:
Review cycles for coded prototype: ~1-2 sprints (including code iterations)
Pros of CodePen:
Cons of CodePen:
With Danielle's simplification of the questions and flow, we are confident that OW can be made to work with very little need for workarounds. Cindy wrote up findings from her quick PoC in a comment above this one. She would most likely want to build the results screen(s) outside OW, but she is confident she could do that easily and without impacting the quality of the research results.
Effort to build prototype:
Review cycles for OW prototype (2-4 people): 1-1.5 sprints (including prototype updates)
Pros:
Cons:
If we code a prototype: we recommend VAgov Staging. It has a similar build timeline to CodePen, but saves overall effort, since we would essentially be building the final app code.
Future: CodePen offers some advantages over the VAgov platform in some scenarios, as described by Liz Lantz in our meeting (rapid build time, non-devs can access code, a possible personalization use case leveraging those advantages). However, the risks related to importing Design System components / a11y build and testing, plus Optimal Workshop's built-in accessibility and off-the-shelf survey functionality, lead us to prefer OW.
@davidconlon, please see the comment above for the team's recommendation on PACT Act prototyping.
OK, this seems like another analysis. Is the decision that we go with Optimal Workshop?
Yes, OW will save us a couple months potentially. Closing.
Danielle has updated the question logic to include more branching based on multi-select (checkbox) questions, which Optimal Workshop doesn't support. We could ask research participants to check only one option instead of multiple, but I'd prefer to instead use a more powerful survey tool to make a more realistic prototype for research. Survey Monkey, which Agile 6 already has licenses for, supports branching based on multi-select questions and also the creation of rules for deciding when/where to jump to. Jo Agnitti is currently using Survey Monkey and confirmed this. I've asked her more questions about the survey capabilities in this slack thread.
Today I requested a Survey Monkey license from Tony Arashiro in this slack thread so I could try it out, but he said "You would need to go through your delivery manager. They would need to determine if it is in the materials budget, provide justification, and provide the number of licenses along with total cost. Once that is approved I can procure the license for the team and provide access...we need to have justification from our team and the COR needs to approve it along with the other items mentioned." So I'd like to talk with @jilladams about this ASAP because I may have time to build the prototype this coming week while my VBA research is still waiting for content/design. FYI @wesrowe .
Today @wesrowe and I met with Joann Ignitti to learn more about Survey Monkey (SM). Below are my notes from the meeting.
@jilladams @wesrowe I'd like to proceed with a request for a SM license for the following reasons:
I'm going to research advanced branching in Survey Monkey, which isn't supported by Joann's license, to make sure it will do the second item above: branch based on responses to multiple questions.
UPDATE: I believe this is the case. You can define rules of the form If Q1 is X and Q2 is Y then branch to page Z.
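The rule form described above ("If Q1 is X and Q2 is Y then branch to page Z") can be sketched as a list of predicate rules evaluated in order, with a default destination when nothing matches. A minimal illustration (question IDs, answer values, and page names are all hypothetical, not Survey Monkey's actual API):

```python
# Hypothetical sketch of advanced-branching rules of the form
# "If Q1 is X and Q2 is Y then branch to page Z", evaluated in order.
# All question IDs, answers, and page names are illustrative only.
RULES = [
    ({"Q1": "Yes", "Q2": "1962-1975"}, "page_era_one"),
    ({"Q1": "Yes", "Q2": "1990-present"}, "page_era_two"),
]
DEFAULT_PAGE = "page_generic_results"

def next_page(responses: dict) -> str:
    """Return the first page whose conditions all match the responses."""
    for conditions, page in RULES:
        if all(responses.get(q) == answer for q, answer in conditions.items()):
            return page
    return DEFAULT_PAGE
```

The key capability for our prototype is that a single rule can test multiple questions at once, which Optimal Workshop's single-question branching cannot do.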
@jilladams @wesrowe Here's what I learned about Survey Monkey pricing from https://www.surveymonkey.com/pricing/individual/?ut_source=pricing-teams-summary... I need a license with Advanced survey logic. Options:
And I'd probably need the license for only 2 months:
What else do we need to request this, @jilladams ?
Noted. I'll write it up and send to PO / COR now.
Thank you! Let me know if you have more questions...
Request emailed to @davidconlon and Keith Riley, subject line "Sitewide T&M Material budget: Survey Monkey license, 2 months", 5pm 7/19.
Survey Monkey approved for a single license, for 2 months. If we need beyond 2 months from date of signup, we will need to revisit & re-request approval for an extension.
I'm working with A6 IT on signing up for the account in order to provide Cindy with a login. Closing this ticket.
Noting: Survey Monkey bills annually for this tier / type of license, so we can't get it for 2 months. Discussion open with COR re: whether we can get approved for annual or need to fall back to Optimal Workshop. For assessing the people hours required to use Optimal Workshop instead, @cindymerrill provided a doc https://dvagov-my.sharepoint.com/:w:/g/personal/cynthia_merrill_va_gov/EcVTOmN-ODhCp3Aeq58pxqoBESad8qopm7WFPknHk_Pbmw?e=pV2tHg which we can use to flesh out the remaining answers to that question.
@thejordanwood we'll need a design assessment here re: lift for #14444 (or a call from @cindymerrill on whether that can definitely be done in OW vs. a mockup), and for #14446. Points / loosely as days is fine; happy to talk through it, or @cindymerrill / @wesrowe can help support.
@thejordanwood The part of my document linked above that's relevant to you is "Someone will need to mock up (low-fi wireframes are fine) the results screens and a web page of links to them all" along with a time estimate for the TBD in "+ TBD days for designer to design multiple results screens and a page with links to them all". For context, see 2 sections in document with headers in RED text.
@jilladams It's #14444 that we'll definitely need Jordan's help on if we use Optimal Workshop. #14446 I might be able to do as the welcome screen in the survey tool.
@thejordanwood might have a hard time estimating her time for mocking up results screens (and home screen) if she can't see the content that goes into them. I just pinged Danielle to request access for Jordan to the Word doc on SharePoint with all the content. Alternately, I could show Jordan in a meeting via screenshare.
UPDATE: SharePoint link from Danielle that should work for Jordan: https://dvagov-my.sharepoint.com/:w:/g/personal/danielle_thierry_va_gov/ESmD5P5nmjhPrE7BmMnMpGMBORL7c_BFQfORmHrpIyFBKw?e=iaEcFG
This document is very complicated, so it might be helpful for me to explain it to @thejordanwood. Or you could start with my document where I documented the impacts, since its first section gives an overview of the high-level logic flow, which includes the results.
BLOCKED on direction of survey tool vs coded prototype (ticket not needed if coding)
Description
User story
AS A Researcher I WANT to attempt to build a proof-of-concept (PoC) version of the PACT Wizard SO THAT we will know for sure whether Optimal Workshop can provide the capabilities we need for a research prototype.
AS A Veteran I WANT to understand the complexity of the question SO THAT I can complete the wizard successfully
PACT Wiz questions mural
Key requirements (at first glance):
If service dates include 1962 - 1975 + "No" to S3.1
Optional tools / eval notes
Effort level and requirements are the differentiator.
Acceptance criteria