Closed · christianp closed this 1 month ago
I'd like to see some kind of auto-submit if a timed test runs out of time. We've had a couple of instances of students forgetting to submit questions while doing a test, then getting 0 for it because the timer ran out and nothing had been submitted.

Don Shearman, Western Sydney University
@dshearman are you using the LTI tool? If so, it can mark unsubmitted answers for all attempts - see https://numbas-lti-provider.readthedocs.io/en/latest/instructor/resources.html#remark.
I found out one reason why our students might be reluctant to press "Submit": in Canvas Quiz, the "Submit" button ends the quiz.
Can anyone think of a reason not to use "Save this answer" for the button label?
Could this lead to confusion if a student changes an answer and then doesn't know which value is being taken for marking? I think my preferred option is the last one in the original post - automatically save answers with a "show feedback" button when appropriate.
I would prefer the last option: whatever is typed into the answer field last is the answer. Submitting the answer feels like a non-decision.
I started working on this yesterday. Automatically submitting parts when the focus moves away from that part works OK for the built-in part types.
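To illustrate, here's a minimal sketch of the focus-based approach, assuming a `part` object with `isDirty` and `submit()` (stand-in names, not the real Numbas API):

```javascript
// Minimal sketch of focus-based auto-submission. `focusout` bubbles,
// unlike `blur`, so a single listener on the part's container covers
// all of the part's inputs.
function attachFocusAutoSubmit(partElement, part) {
    partElement.addEventListener('focusout', (event) => {
        // event.relatedTarget is the element gaining focus; if it's
        // still inside this part (e.g. tabbing between two inputs of
        // the same part), don't submit yet.
        if (!partElement.contains(event.relatedTarget) && part.isDirty) {
            part.submit();
        }
    });
}
```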
I'm struggling to think of a way of doing auto-submission with interactive diagrams. It's not clear when you've stopped interacting with a diagram, or what events count as entering an answer.
I think this crystallises a problem that we always had with marked interactive diagrams: the state it starts in could represent an answer, so is it OK to not touch it and just press "submit part"?
It's made even harder by the fact that JSXGraph diagrams aren't explicitly linked to specific parts: when any JSXGraph board is updated, every part with a jxg_output marking note has its answer input updated based on that note, and a part is only marked "dirty" if the new value of the input differs from the previous one.
So when a board updates, we have to check every part to see if its staged answer has changed and, if so, it could be auto-submitted.
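To make that concrete, here's a sketch of that check; all the names (`question.parts`, `markingNotes`, `evaluateNote`, `stagedAnswer`, `setDirty`) are stand-ins for whatever the real runtime uses, not the actual code:

```javascript
// When any JSXGraph board updates, every part with a jxg_output
// marking note recomputes its staged answer from that note; only parts
// whose staged answer actually changed are marked dirty and queued for
// auto-submission.
function onBoardUpdate(question) {
    for (const part of question.parts) {
        if (!('jxg_output' in part.markingNotes)) {
            continue;
        }
        const newAnswer = part.evaluateNote('jxg_output'); // hypothetical helper
        // Cheap structural comparison; good enough for a sketch.
        if (JSON.stringify(newAnswer) !== JSON.stringify(part.stagedAnswer)) {
            part.stagedAnswer = newAnswer;
            part.setDirty(true);
            scheduleAutoSubmit(part); // debounced; see the next sketch
        }
    }
}
```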
I think the principle is that the marking feedback should represent the current state of the question, but not run too often while an answer is being entered. So it might be reasonable to auto-submit parts a certain amount of time after the last interaction with a diagram.
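A possible debounce, as a sketch: submit a part a fixed time after the last interaction that changed its staged answer. The 1-second delay matches the demo described below; `part.submit()` is again a stand-in.

```javascript
const AUTO_SUBMIT_DELAY = 1000; // milliseconds
const pendingSubmits = new Map();

function scheduleAutoSubmit(part) {
    // Restart the timer on every interaction, so submission only
    // happens once the student has paused.
    clearTimeout(pendingSubmits.get(part));
    pendingSubmits.set(part, setTimeout(() => {
        pendingSubmits.delete(part);
        part.submit();
    }, AUTO_SUBMIT_DELAY));
}
```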
It's even harder with extension parts: they don't have a jxg_output note. How do we tell when to auto-submit them?
We still need to decide what to do with the feedback messages area: if parts are submitted automatically, then it needs to be a consistent size so that things in the page don't move after auto-submission.
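One way to keep the layout stable would be to reserve the feedback area's space up front; a rough sketch, where the `.feedback-messages` selector is an assumption about the theme's markup rather than a real class name:

```javascript
// Give every feedback area a fixed minimum height before any
// submission happens, so auto-submission doesn't shift the layout.
for (const el of document.querySelectorAll('.feedback-messages')) {
    el.style.minHeight = '3em'; // roughly one message's worth of space
}
```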
Here's the current state of affairs, with a diagram linked to a multiple choice input:
https://github.com/numbas/Numbas/assets/19513/854ee41b-09de-4cb6-a759-a3c34ccc0772
I use the mouse to drag a point around. As soon as I move it and the multiple choice input changes, the part is marked as dirty. One second after the input changes, the part is automatically submitted. If I interact with the checkboxes directly, the part is only automatically submitted once focus moves away from them.
It turns out that adding a jxg_output: interpreted_answer note to an extension part which refers to a diagram is sufficient to get it to auto-submit.
We had a long discussion session around the implementation of automatic submission, which I will summarise here for our records:
I've compiled the demo exam with this feature and put it online at https://www.staff.ncl.ac.uk/christian.perfect/numbas/automatic-submission-2024-05-29/.
Anybody who is interested, please give it a go and give us your feedback!
Automatic submission is now in v8.0, so I'm closing this.
I think this is a user interface problem.
I think automatically submitting answers would be good: it's a principle in Numbas that you can submit as many times as you like, so just do that whenever the answer changes. It's not a good idea to show the feedback immediately, though - it would be distracting, and students might want to ask for feedback rather than be automatically given it.
Parts where marking takes a long time or consumes lots of resources, such as the code marking extension, might not want to do this. Maybe the maximum frequency should be an attribute of the part that extensions can change?
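As a sketch of what that could look like (the `minSubmitInterval` attribute is invented here; the idea is that an expensive marker like the code marking extension could raise it):

```javascript
// Per-part cap on how often auto-submission may run.
function scheduleAutoSubmitThrottled(part) {
    const interval = part.minSubmitInterval ?? 1000; // ms; extensions may override
    const elapsed = Date.now() - (part.lastSubmitTime ?? 0);
    clearTimeout(part.pendingSubmitTimer);
    // Wait out whatever remains of the interval before submitting.
    part.pendingSubmitTimer = setTimeout(() => {
        part.lastSubmitTime = Date.now();
        part.submit();
    }, Math.max(0, interval - elapsed));
}
```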
Adaptive marking and explore mode also present problems: submitting one part can invalidate another.
Options: