vtos / moodle-mod_adaptivequiz

CAT (Computer-Adaptive Testing) implementation for Moodle LMS (https://moodle.org): officially supported plugin.
https://moodle.org/plugins/mod_adaptivequiz

Time limit setting for adaptive quiz? #16

Open XJTLUmedia opened 1 year ago

XJTLUmedia commented 1 year ago

Right now, the standard Moodle quiz has a time limit setting. It would be good to have an implementation of a mandatory time limit for the CAT as well. I saw a discussion on this made by [David Heuring]; the objection there was that the algorithm would stop sooner than the proper score/level is determined, so although they considered doing that, it really isn't an acceptable solution, and while one could add a time clock as a reminder to students, they could still continue the test.

I think a good solution to this problem would be: when the quiz reaches the time limit, the CAT automatically flags all remaining questions sequentially as wrong answers, just as if the student had answered every question after the cutoff incorrectly themselves. That way the CAT could still generate a score, rather than the time clock being only a reminder.
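A minimal sketch of what I mean, in Python pseudocode rather than the plugin's actual PHP (the +1/-1 level rule and all function names here are illustrative assumptions, not the plugin's real algorithm or API):

```python
MIN_LEVEL, MAX_LEVEL = 1, 5  # assumed difficulty range, for illustration only


def next_level(level: int, correct: bool) -> int:
    """Illustrative CAT rule: up one level on a correct answer, down one on a
    wrong answer, clamped to the allowed range."""
    return max(MIN_LEVEL, min(MAX_LEVEL, level + (1 if correct else -1)))


def force_finish_on_timeout(next_question_level: int, questions_answered: int,
                            total_questions: int) -> list[tuple[int, bool]]:
    """When the time limit is reached, flag every remaining question as wrong,
    one after another, letting the level keep adjusting so the attempt can
    still produce a score."""
    forced = []
    level = next_question_level
    for _ in range(total_questions - questions_answered):
        forced.append((level, False))            # question at this level, marked wrong
        level = next_level(level, correct=False)
    return forced


# Example: 5 of 10 questions answered when the clock runs out, and the next
# question would have been at level 3.
print(force_finish_on_timeout(next_question_level=3, questions_answered=5,
                              total_questions=10))
# -> [(3, False), (2, False), (1, False), (1, False), (1, False)]
```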

Hope you can implement this, as such a feature has been requested by many teachers. Thanks

vtos commented 1 year ago

@XJTLUmedia hi!

the CAT automatically flags all remaining questions sequentially as wrong answers

We can never know what those 'remaining questions' are. The essence of such a quiz is that the next question is decided on the fly; that is what makes it adaptive.

Also, an adaptive quiz which is forced to finish doesn't provide any value for measuring ability, for obvious reasons: it cannot produce any sensible score to assess, because the taker hasn't properly finished the quiz.

What could possibly be done instead is just finishing the attempt forcibly and marking it as 'time-wise failed' or something similar.
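In rough code terms, that alternative could be something like the following (a hypothetical sketch only; these are not the plugin's actual attempt states or API):

```python
from enum import Enum


class AttemptState(Enum):
    """Hypothetical attempt states; TIMED_OUT would be the new one."""
    IN_PROGRESS = "inprogress"
    COMPLETED = "completed"
    TIMED_OUT = "timedout"  # forcibly finished because the time limit was reached


def close_attempt_on_timeout(attempt: dict) -> dict:
    """Forcibly finish the attempt without reporting an ability measure,
    since the measurement was never properly completed."""
    attempt["state"] = AttemptState.TIMED_OUT
    attempt["ability_measure"] = None  # no sensible score to report
    return attempt
```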

XJTLUmedia commented 1 year ago

The grade would be important when a teacher makes an assessment. It would be hard to put a number on a quiz attempt that is only marked as "time-wise failed" unless it is given a pre-set score like 0, and it's hard to decide whether someone who left 4 questions undone in total performed better than someone else who left 5 undone.

For example, suppose the CAT's lowest level is 1 and its highest level is 5, and that a correct answer raises the difficulty level by 1 while a wrong answer lowers it by 1. If the quiz has 10 questions and a student only completes 5 of them in time, with a mix of correct and wrong answers like 3 (correct) - 4 (correct) - 5 (correct) - 5 (wrong) - 4 (wrong), then under this rule the algorithm treats all the remaining questions as wrong, the level drops down to 1, and the final result is 3 (correct) - 4 (correct) - 5 (correct) - 5 (wrong) - 4 (wrong) - 3 (wrong) - 2 (wrong) - 1 (wrong) - 1 (wrong) - 1 (wrong). Similarly, you can work out the result if the student completes 6 questions and leaves 4 undone: if the extra question was answered wrong there is no difference, but if it was answered right the result differs.

Simply let the algorithm run sequentially and flag each subsequent question as wrong, exactly as when a student finishes the test themselves with wrong answers after the cutoff question. You could also apply an additional penalty, such as dropping 2 levels, before calculating the final score, to discourage students from leaving the quiz unfinished. I think this example clarifies what I mean.
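A rough simulation of that example (Python rather than the plugin's PHP; the +1/-1 level rule stands in for the plugin's actual ability estimation, and applying the extra penalty at the cutoff is just my reading of the suggestion):

```python
MIN_LEVEL, MAX_LEVEL = 1, 5  # range used in the example above


def simulate(answered: list[tuple[int, bool]], total: int,
             extra_penalty: int = 0) -> list[tuple[int, bool]]:
    """Replay the (level, was_correct) pairs actually answered in time, then
    flag every remaining question as wrong; `extra_penalty` is the optional
    additional level drop applied at the cutoff."""
    history = list(answered)
    last_level, last_correct = history[-1]
    # level of the first unanswered question under the +1/-1 rule
    level = max(MIN_LEVEL, min(MAX_LEVEL, last_level + (1 if last_correct else -1)))
    level = max(MIN_LEVEL, level - extra_penalty)  # optional discouragement penalty
    for _ in range(total - len(history)):
        history.append((level, False))     # forced wrong answer
        level = max(MIN_LEVEL, level - 1)  # wrong answer drops one level
    return history


# The example above: 5 of 10 questions done in time.
done = [(3, True), (4, True), (5, True), (5, False), (4, False)]
print([lvl for lvl, _ in simulate(done, total=10)])
# -> [3, 4, 5, 5, 4, 3, 2, 1, 1, 1]
print([lvl for lvl, _ in simulate(done, total=10, extra_penalty=2)])
# -> [3, 4, 5, 5, 4, 1, 1, 1, 1, 1]
```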