rifflearning / zenhub

This is the master repository for the Riff Projects in our ZenHub Workspace

Data Analysis, NEXT Canada Course. #67

Open adonahue opened 5 years ago

adonahue commented 5 years ago

This story is to allocate time for doing analysis of the NEXT Canada course data. Directional requirements about what the analysis should look for are here: https://docs.google.com/document/d/19HvPJXBceO6NbRXvyR93O79tIHuk3N1TmNuYiH_4XCA/edit?ts=5d0bb181. @adonahue is responsible for providing finalized requirements for this story.

Perform Analytics (Rough Sketch)

Business requirements are internal and related to the grant; they also cover how we want to find and present positive product findings, and will help frame the marketing of the course design and Riff's role in the course experience.

jaedoucette commented 5 years ago

Preliminary results in this document: https://docs.google.com/document/d/1Mp1sVU-FYZrxEPVvmi0W-UAmIiLCUZpH9sefTNNdMsc/edit?usp=sharing

Results are fully scripted. Next tasks:

- [x] Create a template for a user-readable report (TeX perhaps?).
- [x] Map results to report template (see sketch below).
- [ ] Automatically upload reports to a shared directory on Google Drive.
- [x] Add reports on early Riff Video usage.
- [x] Add reporting on course completion (via soft drops file).
- [x] Add reports on time spent using Riff Video.
- [ ] Add reports on Riff text interaction.
- [ ] Add reports based on demographic features.
- [ ] Add reports based on customer satisfaction.
- [ ] Add reports on changes in conversational patterns of groups over time (harder).
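For the template-mapping step, the rough shape is something like this (a minimal sketch only; the template path and result fields here are placeholders, not the real ones):

```python
# Minimal sketch of filling a LaTeX report template with scripted results.
# Assumes a .tex template using $-style placeholders (string.Template syntax);
# any literal dollar signs in the template would need to be escaped as $$.
from pathlib import Path
from string import Template

# Hypothetical results dict produced by the analysis scripts.
results = {
    "course_name": "NEXT Canada",
    "n_participants": 42,      # placeholder value
    "completion_rate": "68%",  # placeholder value
}

template = Template(Path("report_template.tex").read_text())
Path("report.tex").write_text(template.substitute(results))
# report.tex can then be compiled with pdflatex to produce the shared PDF.
```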

jaedoucette commented 5 years ago

@ebporter @juliariffgit

Not super urgent, but I need some extra information about how the Next course was run before I can finalize conclusions for parts of this and be confident about the data:

  1. Did students have to use Riff video chat for any reason? (e.g. a mandatory assignment? An optional assignment?)

  2. Were students explicitly encouraged to use Riff to make video calls during the course, and did these encouragements occur after some students had already dropped out? If so, when did they occur exactly?

  3. What were the criteria for becoming certificate eligible? Is this something students chose or paid for? Or something they earned?

  4. Only students were assigned to named cohorts, right? No non-students? It looks like this is true, but I need to be sure.

  5. It looks like everyone who was not enrolled was some kind of staff or mentor. Do we have records of anyone who enrolled and then dropped? Do we have a reliable way to distinguish these from other students, or is this something I should build from, e.g., Mattermost data (even there, it doesn't look like there are a lot of extra records)?

juliariffgit commented 5 years ago

Hey, @jaedoucette.

Answers for you:

  1. While they had a number of assignments that prompted and encouraged them to use Riff Video, they would have been able to complete all mandatory assignments by using some other type of collaboration tool, like a hangout or conference call.

  2. This is a bit complicated, so I'll try to write it here, but if we need to chat it through, just let me know. In a nutshell, the course was split into two halves. In the first half, all registered learners were manually assigned to small Riff groups they did not choose. These were supposed to have 5 members, but some groups had only a few truly active participants. We did reassign lone wolves to new groups if stragglers never joined or engaged with the originally assigned group, but there was about a two-week period where many of those manual teams were in flux.

The second half of the course was made up of group work where learners selected their own teams based on their interest in a topic. These groups in general had more engagement, but there were still a few lone wolves and people who joined a group and then disengaged.

Learners were encouraged to chat via Mattermost in special private rooms, and meet via video to talk through assignments or when chat wasn't the best mode of communication.

  3. Certificates were earned by students scoring 80% or better, with weighting of 65% on the Capstone project and 45% on collaboration, pitches, and coding. Whether their enrollment was paid or not, every enrolled learner was allowed to attempt to earn a certificate.

  4. Mostly true, BUT...Jason the TA did join some cohorts if there was space for him. This made him more available to learners in private rooms. And Capstone mentors did join the Capstone cohorts in order to offer guidance.

  5. We have a document outlining registrants, hard drops and soft drops. I'll share that with you via Drive now. Let me know if that has the info you need.

jaedoucette commented 5 years ago

@juliariffgit Thanks, this is super helpful. Some followups:

Replies to 1 & 2 make perfect sense to me.

  3. My datasets list two variables: "Certificate Eligible" and "Certificate Delivered". Is there a distinction between these two?

  4. Hmm. The "cohort" field of the datasets has one entry for "Capstone Mentors", and others for various kinds of staff. When you say they joined the student cohorts, did they make a separate account to do this, or was that really more like joining one of the cohort's channels?

  5. Thanks, this is very useful.

juliariffgit commented 5 years ago
  3. Ah. Yes. This is because of openedx/Appsembler. Each student who is eligible has to request their cert in order for it to generate. Some learners invariably forget to do this. So everyone "eligible" earned it, but only those "delivered" are actually appearing on Dashboards. There is no way to auto-deliver the missing ones (sigh).

  4. I see. So..."cohort" is an appsembler/openedx construct. I tried VERY hard to assign all staff to an appropriate cohort in openedX. BUT...some staff joined "student" rooms in Mattermost DIRECTLY by being added via Mattermost. Openedx "cohort" or "group" membership should mostly match Mattermost "room" membership, but may not in all cases.
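For the analysis, the practical consequence of point 3 above is that "Certificate Eligible" should be treated as the ground truth for who earned a certificate, with "Certificate Delivered" reflecting only who remembered to request one. A minimal sketch, assuming a pandas DataFrame with those two columns as booleans (the file name here is hypothetical):

```python
import pandas as pd

# Hypothetical enrollment export; the two column names are the ones described
# in the thread, assumed here to be boolean flags.
students = pd.read_csv("enrollments.csv")

earned = students["Certificate Eligible"]      # ground truth for completion
requested = students["Certificate Delivered"]  # only those who requested the cert

completion_rate = earned.mean()
unclaimed = (earned & ~requested).sum()  # earned a cert but never requested it
print(f"completion: {completion_rate:.1%}, unclaimed certificates: {unclaimed}")
```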

jaedoucette commented 5 years ago

@juliariffgit Okay, 3 makes sense now.

For point 4, I think that should be okay. As long as the openedx cohorts are correct, I can filter out staff from the Mattermost data fine (sketched below). The big thing is whether the openedx cohorts can be trusted; it sounds like they can be, even if Mattermost rooms might not be?
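Concretely, the filtering step could look something like this (a rough sketch; file names, column names, and the staff cohort labels are all assumptions):

```python
import pandas as pd

# Hypothetical exports: openedX cohort assignments and Mattermost activity.
cohorts = pd.read_csv("openedx_cohorts.csv")   # assumed columns: user, cohort
events = pd.read_csv("mattermost_events.csv")  # assumed columns: user, room, ...

# Assumed labels for non-student cohorts; the real list would come from the data.
STAFF_COHORTS = {"Capstone Mentors", "Staff"}
staff_users = set(cohorts.loc[cohorts["cohort"].isin(STAFF_COHORTS), "user"])

# Trust the openedX cohorts: drop staff activity even where staff were added
# to student Mattermost rooms directly.
student_events = events[~events["user"].isin(staff_users)]
```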

jaedoucette commented 5 years ago

@adonahue I think this was probably larger than my estimate of 13, and that we should split it into smaller stories.

My suggestion is to leave this one as-is, with an estimate of 13, and the following featureset:

- [x] Create a template for a user-readable report (TeX perhaps?).
- [x] Map results to report template.
- [x] Add reports on early Riff Video usage.
- [x] Add reporting on course completion (via soft drops file).
- [x] Add reports on time spent using Riff Video.

I will provide a final set of reports to you for these, with your acceptance being the criterion for closing this card.

Additionally, I think we should make the following new cards, and I can take on some of them during the next sprint:

- [ ] Automatically upload reports to a shared directory on Google Drive (see sketch below).
- [ ] Add reports on Riff text interaction.
- [ ] Add reports based on demographic features.
- [ ] Add reports based on customer satisfaction.
- [ ] Add reports on changes in conversational patterns of groups over time (harder).
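For the Drive upload card, one plausible starting point, using the Google Drive v3 API via google-api-python-client (a sketch only; this assumes a service account that has been granted access to the shared folder, and the folder ID and file paths are placeholders):

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

# Auth via a service account with access to the shared folder (placeholder path).
creds = service_account.Credentials.from_service_account_file(
    "service_account.json",
    scopes=["https://www.googleapis.com/auth/drive.file"],
)
drive = build("drive", "v3", credentials=creds)

# Upload the compiled report into the shared directory (placeholder folder ID).
metadata = {"name": "report.pdf", "parents": ["SHARED_FOLDER_ID"]}
media = MediaFileUpload("report.pdf", mimetype="application/pdf")
drive.files().create(body=metadata, media_body=media, fields="id").execute()
```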

If this sounds good, I can make those cards, and flesh them out a bit.

adonahue commented 5 years ago

Yes that sounds good to me, thank you @jaedoucette !