cul-2016 / quiz


Feedback and scoring for categorised questions #296

Open iteles opened 7 years ago

iteles commented 7 years ago

@stianr Could you please elaborate on the feedback you would like students to receive from categorised questions at the MVP stage and how you would envision this appearing in the user flow?

e.g. I would like to see students receive a single line of feedback saying 'Your best category is ..... '. How is this calculated and how should it be displayed?

stianr commented 7 years ago

At its simplest, I envisage three categories that the lecturer can choose (as well as an N/A alternative):

Logic

Taking the individual student's data, if the number of factual, conceptual, or application questions answered is below 5 for any of the three categories:

When you complete more quizzes, you will receive feedback on the type of question that you do well at, and the type you could improve on.

This avoids generating completely spurious feedback after the first quiz.
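A minimal sketch of that gating check, assuming hypothetical per-category counts of answered questions on the student record (field and function names are illustrative, not the app's actual API):

```typescript
// Hypothetical shape of a student's per-category answered-question counts.
interface CategoryCounts {
  factual: number;
  conceptual: number;
  application: number;
}

const MIN_ANSWERS_PER_CATEGORY = 5;

// True when the student has answered too few questions in at least one
// category to generate meaningful per-category feedback.
function needsMoreData(answered: CategoryCounts): boolean {
  const counts = [answered.factual, answered.conceptual, answered.application];
  return Math.min(...counts) < MIN_ANSWERS_PER_CATEGORY;
}

// If needsMoreData(...) is true, show the generic message:
// "When you complete more quizzes, you will receive feedback on the type
//  of question that you do well at, and the type you could improve on."
```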

Best logic (probably for future implementation)

If there are more than 5 responses for each question type, for each type, calculate the proportion of questions answered correctly [i.e. correct / (correct + incorrect), ignoring unanswered questions], and do the same for all other students on the module. Then, taking the subset of students who answered at least 5 questions in each category, calculate the rank position of the student's proportion score on each of the three question types. Determine the category for which the user's rank is highest (H), and the one for which it is lowest (L).

Looking at the different types of question - factual, conceptual, application - relative to other people who have taken the quizzes, you are doing best at H questions. You could improve most on L questions.
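A rough sketch of that calculation, assuming per-student correct/incorrect counts for each category are available; the data shapes and the tie handling in the rank are illustrative choices, not the project's actual implementation:

```typescript
type Category = 'factual' | 'conceptual' | 'application';

interface StudentScores {
  id: string;
  // Correct and incorrect answer counts per category (unanswered ignored).
  correct: Record<Category, number>;
  incorrect: Record<Category, number>;
}

const CATEGORIES: Category[] = ['factual', 'conceptual', 'application'];

// Proportion correct for one category: correct / (correct + incorrect).
function proportion(s: StudentScores, c: Category): number {
  return s.correct[c] / (s.correct[c] + s.incorrect[c]);
}

// Only students with at least 5 answers in every category are compared.
function eligible(s: StudentScores): boolean {
  return CATEGORIES.every(c => s.correct[c] + s.incorrect[c] >= 5);
}

// Returns the category where the current student's rank among eligible
// students is highest (H = best) and lowest (L = worst).
function bestAndWorst(current: StudentScores, all: StudentScores[]) {
  const cohort = all.filter(eligible);
  // Rank = number of eligible students the current student matches or beats.
  const rank = (c: Category) =>
    cohort.filter(other => proportion(current, c) >= proportion(other, c)).length;
  const ranks = CATEGORIES.map(c => ({ category: c, rank: rank(c) }));
  ranks.sort((a, b) => b.rank - a.rank);
  return { best: ranks[0].category, worst: ranks[ranks.length - 1].category };
}
```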

Writing it down makes it look quite tricky to implement. Perhaps it would be worth keeping a running total, so that when a user completes a quiz, not only is their score recorded, but fields for the number of correct and incorrect answers in each category are incremented appropriately? But it might be too much for this MVP.
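If the running-total route were taken, the update on quiz completion might look something like this sketch (again with hypothetical field names):

```typescript
// Same category union as in the sketch above.
type Category = 'factual' | 'conceptual' | 'application';

// Hypothetical per-user, per-module running totals, updated on each quiz
// submission so feedback never has to re-scan old answers.
interface RunningTotals {
  correct: Record<Category, number>;
  incorrect: Record<Category, number>;
}

interface AnsweredQuestion {
  category: Category;
  wasCorrect: boolean;
}

// Increment the appropriate counter for each answered question.
function recordQuizResult(totals: RunningTotals, answers: AnsweredQuestion[]): void {
  for (const answer of answers) {
    if (answer.wasCorrect) {
      totals.correct[answer.category] += 1;
    } else {
      totals.incorrect[answer.category] += 1;
    }
  }
}
```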

Easiest logic to use for now

For now, and much simpler (though less accurate): for each question type, calculate the proportion of answers that were correct across all answers from all users in the module. Then take the current user's proportion as a fraction of the all-users proportion, and use that as a performance measure for each category.

Worked example:

Across the quizzes the student has completed, they have scored 12/15 (80%) for Factual, 6/10 (60%) for Conceptual, and 4/8 (50%) for Application. Across all questions and all students in the module, the proportions of answers that were correct were 90% for Factual, 60% for Conceptual, and 40% for Application.

The current student's relative score would be 80/90 = 0.89 for Factual, 60/60 = 1 for Conceptual, and 50/40 = 1.25 for Application. So H would be Application and L would be Factual. Then feedback can be generated as above.

(Note this shows why we can't use absolute scores - factual questions may just be a lot easier than application or vice versa...)
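Putting the worked example into a small sketch of the "easiest logic" (illustrative only; the category names and object shapes are assumptions):

```typescript
// Proportions correct per category, taken from the worked example above.
const categories = ['factual', 'conceptual', 'application'] as const;

const studentProportion = { factual: 0.80, conceptual: 0.60, application: 0.50 };
const moduleProportion  = { factual: 0.90, conceptual: 0.60, application: 0.40 };

// Relative score = student's proportion correct / module-wide proportion correct.
const relative = categories.map(c => ({
  category: c,
  score: studentProportion[c] / moduleProportion[c],
}));
// scores ≈ 0.89 (factual), 1.00 (conceptual), 1.25 (application)

// Highest relative score is the student's best category (H),
// lowest is the one to improve most (L).
const ranked = [...relative].sort((a, b) => b.score - a.score);
const best = ranked[0].category;                  // 'application'
const worst = ranked[ranked.length - 1].category; // 'factual'
```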

sohilpandya commented 7 years ago

Todo