LearnersGuild / game-prototype

Lightweight, minimal implementation of game mechanics for rapid experimentation and prototyping.

Distribute SUS questions through retros #99

Closed tannerwelsh closed 7 years ago

tannerwelsh commented 8 years ago

Critical Strategic Goals

What are the benefits of this change, and whom do they impact?

Describe the change, and provide any needed context.


This is a more MVP version of #98.

heyheyjp commented 8 years ago

@tannerwelsh: I'd like to challenge the idea that collecting this feedback at the end of every project is the most valuable way for us to measure Echo usability.

We're already asking a heck of a lot of questions. In most cases, such as questions about the COS, the space, and their teammates, the survey approach seems to get us the most bang for the buck, and the cognitive load placed on the players/learners is justified IMHO. I don't currently believe this to be the case when it comes to usability, for two reasons: (A) I don't expect the answers to change significantly between surveys, because the rate at which new features are developed is slower than the rate at which projects are completed and retros are submitted; and (B) we already have other data available for analysis that is arguably richer, and certainly less subjective and more reliable: trackable actions/events in the web app and in Echo, server logs, already-submitted LOS issues, and already-submitted responses to the usability survey.

Consider whether this method of collecting usability data could itself hurt usability by exacerbating survey fatigue, and whether other sources of data might yield more value. :)
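
To illustrate the kind of signal already latent in point (B), here is a minimal TypeScript sketch. The event names, shapes, and helper are hypothetical, not Echo's actual schema; it derives survey completion rate and time-to-complete from trackable events alone, with no extra questions asked of anyone.

// Hypothetical event shape; the real event log will differ.
interface AppEvent {
  userId: string
  type: 'survey_opened' | 'survey_submitted'
  timestamp: number // ms since epoch
}

// Derive completion rate and mean time-to-complete from raw events.
function surveyCompletionStats(events: AppEvent[]) {
  const openedAt = new Map<string, number>()
  const durations: number[] = []
  for (const e of events) {
    if (e.type === 'survey_opened') {
      openedAt.set(e.userId, e.timestamp)
    } else if (e.type === 'survey_submitted' && openedAt.has(e.userId)) {
      durations.push(e.timestamp - openedAt.get(e.userId)!)
      openedAt.delete(e.userId) // completed; anything left open counts as abandoned
    }
  }
  const total = durations.length + openedAt.size
  return {
    completionRate: total === 0 ? 0 : durations.length / total,
    meanDurationMs:
      durations.length === 0
        ? 0
        : durations.reduce((a, b) => a + b, 0) / durations.length,
  }
}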

heyheyjp commented 8 years ago

That said, I do think it would be wonderful if there were less friction in the process of providing feedback about issues encountered while using the system. I just feel like we're headed down the wrong path by asking specific multiple-choice questions, and certainly by prompting for feedback so frequently in the app. I'm getting tired just thinking about it. 😛

I like the idea of having a small control visible at all times in the web app(s) that makes it easy to send feedback. A super simple approach would be to have it open the new LOS issue page in a browser window. To get fancier, we could have the system automatically show a dialog any time an error is captured, asking the user to optionally provide feedback about the experience they just had. Ideally, the system could also attach a rich set of information about the context in which the user was working whenever an error occurs.
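
A rough TypeScript sketch of that fancier version follows. The issue-tracker URL, dialog copy, and helper names are assumptions for illustration (GitHub's new-issue page does accept title and body query parameters, but the LOS specifics here are hypothetical):

// Hypothetical: wherever the "new LOS issue" page actually lives.
const NEW_ISSUE_URL = 'https://github.com/LearnersGuild/game-prototype/issues/new'

// Build a new-issue URL pre-filled with context about the failure.
function buildIssueUrl(error: Error): string {
  const title = encodeURIComponent(`Error: ${error.message}`)
  const body = encodeURIComponent(
    [
      `**URL:** ${window.location.href}`,
      `**User agent:** ${navigator.userAgent}`,
      `**Stack:**\n\`\`\`\n${error.stack || '(unavailable)'}\n\`\`\``,
      '',
      'What were you trying to do when this happened?',
    ].join('\n')
  )
  return `${NEW_ISSUE_URL}?title=${title}&body=${body}`
}

// Whenever an uncaught error is captured, optionally prompt for feedback.
window.addEventListener('error', (event: ErrorEvent) => {
  const error = event.error instanceof Error ? event.error : new Error(event.message)
  if (window.confirm('Something went wrong. Would you like to report it?')) {
    window.open(buildIssueUrl(error), '_blank')
  }
})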

tannerwelsh commented 8 years ago

> I don't expect the answers to change significantly between surveys, because the rate at which new features are developed is slower than the rate at which projects are completed and retros are submitted

Good point. This was partly my reason for splitting up the questions so that each player answers only 1 or 2 per week.
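
For example (a sketch, not the current implementation; the helper and constants are made up): give each player a 2-question window into the 10 SUS questions, offset per player and advanced each retro cycle, so every player covers all 10 questions over 5 cycles while each cycle collectively samples the full set.

const SUS_QUESTION_COUNT = 10 // the standard SUS instrument has 10 items
const QUESTIONS_PER_CYCLE = 2

// Which SUS questions (0-based indices) a player answers in a given cycle.
function susQuestionsFor(playerIndex: number, cycleIndex: number): number[] {
  const start =
    ((playerIndex + cycleIndex) * QUESTIONS_PER_CYCLE) % SUS_QUESTION_COUNT
  return [start, (start + 1) % SUS_QUESTION_COUNT]
}

// e.g. player 0 answers [0, 1] this cycle, [2, 3] next cycle, and so on.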

Also, I'd hope that the pace of feature release increases over time. I'd like to design a measurement system that would work for our ideal release cadence, and then work to increase our agility until a twice-monthly or even weekly feedback cycle makes sense.

> we already have other data available for analysis that is arguably richer, and certainly less subjective and more reliable: trackable actions/events in the web app and in Echo, server logs

Super! What do you think we can learn from this? What information can we extract?

> I like the idea of having a small control visible at all times in the web app(s) that makes it easy to send feedback.

Great! Can you make an issue describing how this would work in more detail?

bundacia commented 8 years ago

@tannerwelsh: The way the retro survey is currently implemented, it's possible to give different sets of questions to different teams, but not to give different questions to each player.

In other words, we can do this:

OPTION 1:
#project-team-1
  player1: answers SUS #7 and SUS #8
  player2: answers SUS #7 and SUS #8

#project-team-2
  player3: answers SUS #3 and SUS #5
  player4: answers SUS #3 and SUS #5

...

WAY more easily than we can do this:

OPTION 2:
#project-team-1
  player1: answers SUS #1 and SUS #2
  player2: answers SUS #3 and SUS #4

#project-team-2
  player3: answers SUS #5 and SUS #6
  player4: answers SUS #7 and SUS #8

...

So if we can tweak these requirements to say that we want each team to get 2 random SUS questions on their survey, this gets a ton easier.
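
One way that per-team variant could stay cheap is to seed the selection on the team id, so the questions are deterministic for a given team (consistent with the deterministic-survey constraint mentioned below) but vary across teams. A sketch, with an illustrative FNV-1a hash and hypothetical helper names:

const SUS_QUESTION_COUNT = 10

// Small non-cryptographic string hash (FNV-1a, 32-bit), used only for seeding.
function fnv1a(s: string): number {
  let h = 0x811c9dc5
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i)
    h = Math.imul(h, 0x01000193) >>> 0
  }
  return h
}

// Two distinct SUS question indices, stable per team, varying across teams.
function teamSusQuestions(teamId: string): [number, number] {
  const first = fnv1a(teamId) % SUS_QUESTION_COUNT
  // A second hash picks a nonzero offset, guaranteeing a distinct question.
  const offset = 1 + (fnv1a(teamId + '/2') % (SUS_QUESTION_COUNT - 1))
  return [first, (first + offset) % SUS_QUESTION_COUNT]
}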

Even option 1 will involve a little work, since currently all survey questions are deterministic. So it would be easier still to just add a static set of questions to the survey, so that everyone answers the exact same ones (call that OPTION 0). I'll leave it to you to decide how valuable the payoff is for each approach, but the cost would be something like:

OPTION 0: hours
OPTION 1: days
OPTION 2: weeks

tannerwelsh commented 8 years ago

Thanks @bundacia.

Based on developments in the "Usability measured" goal, I'm not sure this issue will even be needed. Let's freeze it until clearer objectives are defined for that goal; then we can determine whether this issue actually leads toward those objectives.

tannerwelsh commented 7 years ago

Non-essential, closing. Trusting that tension will re-arise if needed.