Closed jackkim9 closed 3 years ago
Hi everyone,
We recently ran pilot sessions as part of the Navigator API Walkthrough Study and I wanted to give you a brief update. The goal of the pilot is to make sure the study design is optimized to answer our research questions, such as:
RQ# | Question | Theme |
---|---|---|
1 | How well can participants construct an accurate mental model of the API (i.e., how things are related and work together) just by “walking through” code samples that implement common navigation scenarios? | Accuracy of Mental Model |
2 | How well do the classes make sense to participants? | Classes and formatting |
3 | How well do the names of the functions, classes, methods, and properties give participants a clear understanding of the API? | Naming |
4 | How well can participants reason about the app’s navigation behavior by reading its implementation? | Ease of reasoning |
5 | How well can participants read the code? | Readability |
We ran the interviews remotely, and each session lasted 60 minutes. We recruited 3 users who found deep linking important for their apps (sign up for future studies!). They self-reported as novice, intermediate, and advanced users of Flutter, respectively.
We used VRouter code snippets for the pilot. The package author had already submitted snippets covering our key scenarios, and they were code-reviewed by our team members. Among the snippets merged into our repo, we focused on Deep Linking with Path Parameters and Dynamic Linking (the ability to create deep links on demand). (We are currently planning to include Deep Linking, Nested Routes, and Sign-in Routing in the final study, as they are the most sought-after navigation scenarios.)
The following are the main findings from the sessions; they might give you some idea of what to expect from the final study outcome:
In each 60-minute interview, we spent 5 minutes learning about the participant and their current navigation needs, 40 minutes asking them to read the two code snippets (‘Deep Linking with Path Parameters’ and ‘Dynamic Linking’) out loud, and 15 minutes having them fill out an exit survey.
Overall assessment of VRouter Snippet #1 (Deep Linking with Path Parameters) and #2 (Dynamic Linking):
Overall, participants were able to construct correct mental models for the Deep Linking and Dynamic Linking snippets. They found the Deep Linking code very easy to understand and the Dynamic Linking code somewhat difficult to understand.
Participant quotes:

> “Amount of code here is much less than what we have for routing”

> “Snippet 1 definitely surprised me when I first saw it, since it was too little [code] to have a fully functioning router going. The second one is just a bit harder, but again, a different use case which I think works fine.”
Detailed findings (Note: emoji ratings are not in participant order; they are sorted for readability)
✅ = The participant had no difficulty with the theme. 🟡 = The participant had some difficulty and needed hints and guidance. 🛑 = The participant had difficulty and was confused about the core concept.
Snippet #1: VRouter – Deep Linking with Path Parameters | |||
---|---|---|---|
Q1 | Accuracy of Mental Model | ✅✅✅ | Overall, participants were able to construct the correct mental models for Deep Linking. |
Q2 | Classes and formatting | ✅🟡🟡 | Overall, the classes made sense to participants. 2 out of 3 participants reported confusion around `VRouteRedirector`, as they were unfamiliar with regular expressions (`RegExp`), e.g. `':_(.+)'`. |
Q3 | Naming | ✅🟡🟡 | Some participants reported confusion around `stackedRoutes`, although the naming made sense to them after hearing additional explanation (“I would have to look at the documentation”). One participant reported potential confusion around “V”, as it can be interpreted as “vertical”. |
Q4 | Ease of reasoning | ✅✅✅ | Participants were able to reason about the code’s navigation behaviors well by reading its implementation. |
Q5 | Readability | ✅✅✅ | Participants found the Deep Linking with Path Parameters code very easy to read. Participants didn’t have to scroll up and down to understand the code. |
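For readers who haven't seen the snippets, here is a minimal sketch of what a VRouter deep-linking route table with a path parameter and a catch-all `VRouteRedirector` can look like. This is assumed from the public VRouter API, not the exact snippet used in the study, and `UserScreen`/`HomeScreen` are illustrative names:

```dart
import 'package:flutter/material.dart';
import 'package:vrouter/vrouter.dart';

void main() => runApp(
      VRouter(
        routes: [
          VWidget(path: '/', widget: const HomeScreen()),
          // ':userId' is a path parameter; deep links like /user/42 land
          // here directly.
          VWidget(path: '/user/:userId', widget: const UserScreen()),
          // The RegExp-based catch-all that confused 2 of 3 participants:
          // ':_(.+)' matches any remaining path and redirects it home.
          VRouteRedirector(path: ':_(.+)', redirectTo: '/'),
        ],
      ),
    );

class HomeScreen extends StatelessWidget {
  const HomeScreen({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) => const Text('Home');
}

class UserScreen extends StatelessWidget {
  const UserScreen({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) =>
      // The path parameter is read back from the router's state.
      Text('User ${context.vRouter.pathParameters['userId']}');
}
```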
Snippet #2: VRouter – Dynamic Linking | |||
---|---|---|---|
Q1 | Accuracy of Mental Model | ✅✅🛑 | Overall, participants were able to construct correct mental models for Dynamic Linking. One participant thought the snippet for Dynamic Linking was an alternative version of the Deep Linking (path parameter) snippet. |
Q2 | Classes and formatting | 🟡🟡🛑 | Participants found `vRouterKey` confusing, as the syntax was unfamiliar to them. Participants also found `onCreate` confusing and needed additional context to understand what it was: “I have some ideas about what’s happening in onCreate but I’m not sure.” (One participant scrolled down to understand it better, then was confused by it again in VWidgets: “Where is the onCreate class? Where is it?”) |
Q3 | Naming | 🟡🟡🛑 | Some naming conventions also caused confusion: `beforeEnter` and `beforeUpdate`. _“When using the VGuard, both beforeEnter and beforeUpdate call the same exact method, but it's not clear what are the differences between them.”_ |
Q4 | Ease of reasoning | ✅✅✅ | Participants were able to reason about the code’s navigation behaviors well by reading its implementation. One participant, on `VGuard`: “I get this. We need this (in our project)!” |
Q5 | Readability | ✅🟡🟡 | Participants found the Dynamic Linking code somewhat difficult to read. Participants had to scroll up and down the code 4–5 times to understand it. |
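To make the `beforeEnter`/`beforeUpdate` finding concrete, here is a hedged sketch of a `VGuard`, based on the VRouter API as we understand it (the `checkAuth` helper and `isSignedIn` flag are hypothetical, not from the study snippet). Both hooks commonly end up calling the same logic, which is exactly what the quoted participant found unclear:

```dart
import 'package:flutter/material.dart';
import 'package:vrouter/vrouter.dart';

bool isSignedIn = false; // Stand-in for real auth state.

// Hypothetical helper: redirect to /login when the user is signed out.
Future<void> checkAuth(VRedirector vRedirector) async {
  if (!isSignedIn) vRedirector.to('/login');
}

final guardedRoutes = VGuard(
  // Runs when navigating INTO one of the stackedRoutes from outside the guard.
  beforeEnter: (vRedirector) => checkAuth(vRedirector),
  // Runs when navigating BETWEEN routes that are already inside the guard.
  beforeUpdate: (vRedirector) => checkAuth(vRedirector),
  stackedRoutes: [
    VWidget(path: '/settings', widget: const Placeholder()),
    VWidget(path: '/profile', widget: const Placeholder()),
  ],
);
```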
Changes we are making to the study design:

We decided to make the following adjustments to the study following the pilot interviews:

- Add follow-up questions about the names that caused confusion (e.g., `beforeEnter` vs `beforeUpdate`, `stackedRoutes`, `VRouteRedirector`).
- Rework the question about `dispose` (‘what does it do?’) and add new questions testing the concepts of the classes participants struggled with (e.g., `VRouteRedirector`, `onCreate`).

Next Steps:
Our next steps are to finalize packages to be included in the actual study, conduct code reviews and merge code snippets submitted by the authors, and finally run the study and share outcomes. Feel free to share your thoughts in the comments and stay tuned for more updates.
Thanks, Jack
@lulupointu You might be interested in the pilot results, though I'd recommend you wait on any changes to vrouter until we have the full results. Feel free to let us know if you have any questions.
Nice progress!
I tried vrouter last month because the documentation explains most of the features I need. The simple example is quite easy to understand, but after trying a complex scenario, I had problems understanding the API in detail. It seems the participants have the same issues as I do (Q2, Q3).
I will write a longer answer with questions later, but I first want to say thanks to the team conducting this experiment! Such in-depth feedback will definitely help.
I see that Q2 and Q3 are an issue and would love to know how to improve on that. After spending so much time working with VRouter, everything looks easy to me. It's great to see the issues, and I would love to hear suggestions on how to improve them.
That being said, thanks again! 🚀
Hi, here are comments/questions regarding the results above.
Concerning the study:
Concerning VRouter w.r.t. the results:

- `RegExp`s are a tough choice. I think they do bring a lot, but I know some people are not familiar with them. The regexp `.+` used with the syntax `:_(.+)` appears everywhere in the examples/docs and will be enough for 95% of devs, so I don't think it's a big deal, but I would love to hear what you think.
- `stackedRoutes` is another hard one. I used to name it `subRoutes`, but the issue is that people were sometimes confused about the difference between vanilla Navigator 2.0 and VRouter. Indeed, in VRouter you would use `stackedRoutes` where you would just use a list in Navigator 2.0. Again, after running a simple example, I think people would understand.
- `vRouterKey` might be confusing for new devs, but it's used even in Navigator 2.0. That said, I think it was not useful here, and it is not useful in 90% of cases.
- `onCreate` was my fault. It is bad code design that I wrote because I was influenced by the code of Navigator 2.0. I will send a PR to remove `onCreate` and therefore remove `vRouterKey`; I think this will kill two birds with one stone.
- `beforeEnter` and `beforeUpdate` are a bit like `initState` vs `didChangeDependencies`. I would love to hear what you think about this, because I don't really know how to make people understand the difference, and I do think having both is useful. I could change `beforeUpdate` to `beforeChange`, but I'm not convinced that this would solve the issue.

I think that's all! Thanks again for choosing VRouter, and I hope to be able to address the issues you uncovered!
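The `stackedRoutes` vs "just a list in Navigator 2.0" point can be made concrete with a small side-by-side sketch (assuming the standard Navigator 2.0 `pages` API and the VRouter `stackedRoutes` parameter; the screens are placeholders):

```dart
import 'package:flutter/material.dart';
import 'package:vrouter/vrouter.dart';

// Vanilla Navigator 2.0: the stack is literally a list of pages.
Widget buildNavigator(bool showDetails) => Navigator(
      pages: [
        const MaterialPage(child: Placeholder()), // home screen
        if (showDetails)
          const MaterialPage(child: Placeholder()), // details screen on top
      ],
      onPopPage: (route, result) => route.didPop(result),
    );

// VRouter: the same stacking relationship is declared with stackedRoutes --
// navigating to /settings/details renders the details screen stacked on
// top of the settings screen.
final settingsRoute = VWidget(
  path: '/settings',
  widget: const Placeholder(),
  stackedRoutes: [
    VWidget(path: 'details', widget: const Placeholder()),
  ],
);
```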
@lulupointu Thank you for your suggestions about the study design. We'll need to think about them. The current procedure doesn't give the participant an opportunity to run the code or check the documentation on their own. However, the moderator gives a concise explanation of specific APIs when the participant appears to be confused.
There are a few reasons for the current setup:
For the `vrouter`-specific results, if you have design options, we could include them in the study as follow-up questions. For example, we could ask the participant, “Would calling this API by this other name have made more sense to you?”
@InMatrix Concerning the study, I understand that's not how you did things and why, but I still think that what I proposed is a better alternative. When searching for a package, I think people will:
Allowing the participant 5 minutes to read the docs, plus the chance to execute the scenario, would reproduce that, thereby best emulating someone coming to look for a package.
I might be wrong about how users approach the search for, and first encounter with, a package for a specific task. Maybe you have access to more data on the subject than I do which contradicts what I am saying; if so, please do share!
Concerning the `vrouter`-specific questions, thanks for this opportunity; I will write some down when I have more time!
I discussed the study design with @jackkim9 today, and here is what we plan to do based on @lulupointu's suggestion:
Repeat #2 and #3 for another snippet. This is largely an attempt to simulate the process of finding a package, learning about a few use cases, and verifying it actually works. #3 was a compromise in order to save time and avoid uncertainty in the participant's local development environment. We are planning a 90-minute session during which the participant will examine 3 snippets. It's quite tight. Please let us know what you think.
Thanks! I think this is pretty much perfect considering all the constraints. Thanks for listening to the feedback and reviewing a process that you already worked on.
Could we have a detailed breakdown of the 90 minutes, as you did in https://github.com/flutter/uxr/issues/40#issuecomment-821306338? Not necessarily now, but whenever you have decided 😊
Sure, here is a breakdown of the 90 minutes:
After the participant walks through each code sample, they'll answer a few "quiz" questions, such as:
Some of those questions are standard, such as the first one. Others came from our heuristic evaluation of the package, in which we generated hypothesized issues users might trip over.
After the quiz, the participant will be given an opportunity to watch the recording of the app in action, and explain whether seeing it helps them better understand the API.
Updates for Navigator API Walkthrough Study (Part of #7)