adonahue opened this issue 4 years ago
@brecriffs - I think many of the stats we previously reported were generated with a tool you put together. Is that tool reusable for reporting the previous stats? Also, did you generate the images as well, or was that a separate step? Thanks!
Perhaps worth doing in conjunction with story #112.
@adonahue Are you referring to the metabase stats that I worked on with Julia?
yes @brecriffs
@adonahue Yeah, we just need a backup of the database that we want stats for.
Note: the Metabase analytics we used only generate statistics for Mattermost usage. The Riff stats listed above would require additional analysis. I know we wrote some code for this for the Recommendations, but we might have to revisit it.
@jaedoucette and @jordanreedie - after talking to @ebporter and @juliariffgit - I revised the must-haves for this story, which are all listed at the top. (Julia already has the last one, which is why it's checked).
@jordanreedie - I know you did some of this before, and @jaedoucette you might have an easy(ish) way to get some of the video data now. If you two have some downtime out in Chicago, can you put your heads together to figure out the best way to do this story?
As a Project Manager, I need to deliver some Riff course metrics to NEXT, so that I can fulfill the requirements of our contract.
As a Product Manager, I want to learn new things about how EDU was used in the course, so that I can better understand what is and is not successful about the product.
The previous deck in which our Riff stats were reported to NEXT is here: https://docs.google.com/presentation/d/1h1D8H1Vcn023basWTzbklSa7CaxQkEWoabrLYSEN9GU/edit#slide=id.g565c4653f3_0_35
A reference doc with more detail about the metrics is here: https://docs.google.com/document/d/1PTgy6-UzSEtyJ0XVXPRsu4XktjRrUVz0U2pUAEu4HZ0/edit#
Some of these metrics will already have been covered by #253. Double-check with @brecriffs about which ones have already been completed.
Acceptance Criteria
[ ] Spreadsheet data and images for the metrics listed below, in a Google Drive shared with the rest of the team.
[ ] Meeting stats, to demonstrate learner engagement on the Riff platform as part of the course experience (a sketch of one way to compute these follows the list), including:
[ ] How many learners used Riff a single time.
[ ] How many learners used Riff more than once.
[ ] The average length of a Riff meeting.
[ ] The longest Riff meeting.
[ ] The typical number of participants (perhaps counts of each group size).
[x] How many learners in a given channel went from message threads to a meeting (suggesting that they switched to a higher-fidelity format).
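A minimal sketch of one way to compute these meeting stats, assuming a hypothetical list of meeting records with `meeting_id`, `participant_ids`, `start`, and `end` fields. The actual Riff database schema isn't specified in this story, so the field names and sample data here are purely illustrative:

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical meeting records; the real data would come from the Riff database.
meetings = [
    {"meeting_id": "m1", "participant_ids": {"u1", "u2"},
     "start": datetime(2019, 5, 1, 10, 0), "end": datetime(2019, 5, 1, 10, 25)},
    {"meeting_id": "m2", "participant_ids": {"u1", "u3", "u4"},
     "start": datetime(2019, 5, 2, 14, 0), "end": datetime(2019, 5, 2, 14, 40)},
]

# How many meetings each learner attended.
meetings_per_user = Counter(
    uid for m in meetings for uid in m["participant_ids"]
)
used_once = sum(1 for n in meetings_per_user.values() if n == 1)
used_more_than_once = sum(1 for n in meetings_per_user.values() if n > 1)

# Meeting lengths: average and longest.
lengths = [m["end"] - m["start"] for m in meetings]
average_length = sum(lengths, timedelta()) / len(lengths)
longest = max(lengths)

# Typical group size: counts of each participant count.
group_sizes = Counter(len(m["participant_ids"]) for m in meetings)

print(f"Used Riff once: {used_once}, more than once: {used_more_than_once}")
print(f"Average meeting length: {average_length}, longest: {longest}")
print(f"Group size counts: {dict(group_sizes)}")
```

Dumping each of these into a sheet per metric would also satisfy the "spreadsheet data and images" criterion above.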
Nice to have
[ ] Video outcomes (as previously reported), so that the role of Riff is understood relative to course outcomes.
[ ] Recommendations, to see if the recommendations feature was successful in getting users to complete specific behaviors. (Simplest ones first; timebox or punt on more complicated ones.)
[ ] Survey summary: the goal of this metric is to understand the aggregate results of the post-meeting survey so that we know 1) how many people took it, 2) whether people took it consistently, and 3) what patterns/trends there were in people's answers, so that we can correlate them with learners' subjective responses (for developing a leadership metric). The value of this information is that it can show whether learners engage with this survey model, whether their comfort with meetings improved over time, and whether we can attribute that in any way to Riff.
[ ] How many total active Riff users there were (peak, and during the last week of the course); see the sketch below.
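A similar minimal sketch for the active-user counts, assuming a hypothetical list of `(user_id, timestamp)` activity events and a known course end date (neither is specified here); it bins distinct users by ISO week, then reports the peak week and the course's final week:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical activity events; in practice these would come from
# Riff/Mattermost usage logs.
events = [
    ("u1", datetime(2019, 5, 1, 10, 0)),
    ("u2", datetime(2019, 5, 1, 11, 0)),
    ("u1", datetime(2019, 5, 8, 9, 30)),
]
course_end = datetime(2019, 5, 10)

# Distinct active users per (ISO year, ISO week).
weekly_users = defaultdict(set)
for user_id, ts in events:
    weekly_users[ts.isocalendar()[:2]].add(user_id)

peak_week, peak_users = max(weekly_users.items(), key=lambda kv: len(kv[1]))
last_week_users = weekly_users.get(course_end.isocalendar()[:2], set())

print(f"Peak: {len(peak_users)} active users in week {peak_week}")
print(f"Last week of course: {len(last_week_users)} active users")
```

How "active" is defined (joined a meeting vs. posted a message vs. merely logged in) is a judgment call worth settling with @brecriffs before reporting these numbers to NEXT.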