learningequality / kolibri

Kolibri Learning Platform: the offline app for universal education
https://learningequality.org/kolibri/
MIT License

Enable multiple attempts for exercises, in the same style as practice quizzes #9851

Open marcellamaki opened 1 year ago

marcellamaki commented 1 year ago

Updated Issue

As reported in the community forum, exercise completion is not always clear to a coach.

Based on a conversation in Slack with Design, @rtibbles, and @lauradanforth, we will be implementing "multiple attempts" for exercises, similar to how we have implemented them for practice quizzes. This will add nuance to completion and help disambiguate scenarios such as the following (a rough sketch of the resulting statuses appears after the list):

  1. the learner has attempted all questions, but has not met the mastery criteria associated with the exercise
  2. the learner has attempted all questions, and successfully met the mastery criteria associated with the exercise
  3. the learner has attempted all questions, and successfully met the mastery criteria associated with the exercise, but then restarted the exercise and is no longer meeting the mastery criteria
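
To make the three scenarios above concrete, here is a minimal, hypothetical sketch of how a coach-facing status could be derived from a learner's attempt history. The enum, function, and flag names are assumptions for illustration, not Kolibri's actual models or code:

```python
# Hypothetical sketch only: not Kolibri's actual models or field names.
from enum import Enum


class ExerciseStatus(Enum):
    IN_PROGRESS = "in_progress"
    NEEDS_HELP = "needs_help"                      # scenario 1
    COMPLETED = "completed"                        # scenario 2
    COMPLETED_PREVIOUSLY = "completed_previously"  # scenario 3


def derive_status(attempted_all_questions, meets_mastery_now, ever_met_mastery):
    """Collapse a learner's attempt history into one coach-facing status."""
    if meets_mastery_now:
        return ExerciseStatus.COMPLETED
    if ever_met_mastery:
        # Mastered on an earlier attempt, but the current retry is below mastery.
        return ExerciseStatus.COMPLETED_PREVIOUSLY
    if attempted_all_questions:
        return ExerciseStatus.NEEDS_HELP
    return ExerciseStatus.IN_PROGRESS
```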

Additional thoughts to consider:

That way we can communicate 'completed once' and 'needs help'; this helps to give the appropriately nuanced message that the user has previously completed this exercise satisfactorily, but upon revisiting has not retained that previous mastery.

I can imagine a scenario where a student first engages with an exercise right after learning the material and masters it quickly, but then revisits weeks later and has lost some of the understanding in the interim.

This has been retargeted to 0.16, as it will require new strings.

How to create this scenario in the current Kolibri context is described below.


Previous issue context

As reported in the community forum, it seems like there may be some bugs with the reporting when learners "need help" and when they have completed an exercise.

The completion criterion is "3 of last 4 questions correct", and it seems the student did achieve that.
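
For reference, a criterion like "3 of last 4 questions correct" amounts to a simple m-of-n check over the most recent answers. A minimal sketch, assuming a plain list of booleans rather than Kolibri's actual mastery-log models:

```python
def meets_m_of_n(answers, m=3, n=4):
    """answers: booleans, oldest first; True means the question was answered correctly."""
    recent = answers[-n:]
    return len(recent) == n and sum(recent) >= m


# e.g. correct, wrong, correct, correct in the last four answers -> mastery met
assert meets_m_of_n([False, True, False, True, True])
```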

Slack thread for reference in support channel

marcellamaki commented 1 year ago

@radinamatic could this go into the QA queue to be replicated, hopefully with a bit more detail 🤞, after we get the current Kolibri patch release out? Thank you!

bjester commented 1 year ago

There also appears to be a bug in the answer history, which should be ordered chronologically. See the screenshots, where the most recent (third) question is shown second in the report; in addition, the 2nd question is shown 1st and the 1st is shown 3rd. Initially, with 2 questions answered, the order was correct.

Screenshots: learner exercise side (2022-11-21 13-59-56) and coach report side (2022-11-21 14-00-07).
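
If the ordering issue is in how the report renders the attempt list, one possible fix (sketched here with an assumed timestamp field name, not Kolibri's actual API) is to sort explicitly before display:

```python
def order_attempts_chronologically(attempts):
    # attempts: list of dicts; "start_timestamp" is an assumed field name
    return sorted(attempts, key=lambda a: a["start_timestamp"])
```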
pcenov commented 1 year ago

Hi @marcellamaki, here are the steps to replicate this.

  1. As a coach, assign an exercise to a learner (for example, Make 10 (grids and number bonds) from the QA channel).
  2. As a learner, answer all the questions correctly until you see the completion modal.
  3. Close the completion modal and proceed to enter incorrect answers.
  4. As a coach, go to Reports > Lesson > > Learner report and observe that, even though the exercise is completed, the coach sees the red "Needs help" icon.

This can be confusing for the coach, as there is now no easy way for them to tell that the exercise has already been completed by the learner. On the learner's side, however, the exercise is still correctly marked as completed, and it is also correctly marked as completed in the exported session logs file.

(Screenshot: 2022-11-23_16-56-31)

marcellamaki commented 1 year ago

@jtamiace I have reframed this issue based on the slack conversation and it is ready for design input. Thanks!

jtamiace commented 1 year ago

Thanks for the clear layout of the issues here! Chatted with @tomiwaoLE about this earlier today and he'll be taking this on

rtibbles commented 1 year ago

See also #4574

tomiwaoLE commented 6 months ago

Summary of conversation with @rtibbles on GitHub issue #9851:

New criteria/objective: Focus initially on practice quizzes?

Suggested UX flow 1 - within quiz/assessment:

  1. Quiz completion validation: Ensure that the quiz has been completed by the user before initiating further actions.
  2. Offer repetition challenge: After quiz completion, present users with the option to retake the quiz or specific questions to reinforce learning. This could be a voluntary action based on user preference.
  3. Timed and score thresholds: Add both timed intervals and minimum score thresholds as criteria for suggesting a repetition challenge. This ensures that suggestions are made at appropriate times and only when necessary for learning reinforcement (a rough sketch of such a check follows this list).
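
A rough sketch of how the timed and score thresholds in step 3 might combine. All names, defaults, and the decision rule are assumptions, not anything already in Kolibri:

```python
from datetime import datetime, timedelta, timezone


def should_suggest_retake(quiz_completed, score, last_attempt_at,
                          score_threshold=0.8,
                          min_interval=timedelta(days=3)):
    """Return True if a repetition challenge should be offered."""
    if not quiz_completed:
        return False                     # step 1: completion validation
    if score >= score_threshold:
        return False                     # strong score: no reinforcement needed
    # step 3: only suggest once enough time has passed since the last attempt
    return datetime.now(timezone.utc) - last_attempt_at >= min_interval
```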

Suggested UX flow 2 - outside quiz/assessment:

  1. Mastery and periodic question reengagement timing: Develop a mechanism to periodically reintroduce questions that learners have previously completed, using varying reengagement time intervals to optimize retention and mastery, with different 'weights' for different time intervals.
  2. Spacing strategies for resurfacing questions: Determine spacing of question reengagement to balance between frequent review and over-repetition.
  3. Criteria for stopping suggestion prompts: Establish criteria to decide when to cease reengagement suggestions, potentially based on performance indicators or learner feedback (a rough sketch of such a scheduler follows this list).
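
One way to sketch the reengagement spacing and the stopping rule described above; the intervals, the success-count stop rule, and the function name are all assumptions for illustration:

```python
from datetime import timedelta

# Assumed spacing: each successful review pushes the next resurfacing further out.
REVIEW_INTERVALS = [timedelta(days=1), timedelta(days=3),
                    timedelta(days=7), timedelta(days=21)]


def next_review_time(last_reviewed_at, consecutive_successes):
    """Return when to resurface a question next, or None to stop prompting."""
    if consecutive_successes >= len(REVIEW_INTERVALS):
        return None                                   # stopping criterion
    return last_reviewed_at + REVIEW_INTERVALS[consecutive_successes]
```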

Presentation of Multiple Attempt Results: Talk to @jtamiace to explore work done on effective ways to display results from multiple attempts.

rtibbles commented 6 months ago

@tomiwaoLE Flagging that we already allow repeats of practice quizzes; it's the one place we do have multiple attempts, so we shouldn't focus only on that here. In the practice quiz case we show the 'time taken' for the quiz and allow voluntary retakes. There's also a suggested time that can be set, so we could leverage that for exercises as well.