First reflections after reading the proposal:

- why not `ScoreItem` instead of `GradeItem` (or `Grade` instead of `Score`) to make it more consistent?
- do we need a tree to start with? just some `ScoreItem`s and an (automatically computed) total score might be sufficient for a first iteration
- `ScoreItem`s might have several properties; they should not all be there to start with, but here is a list of things that come to mind:
> why not `ScoreItem` instead of `GradeItem` (or `Grade` instead of `Score`) to make it more consistent?
This was because with "ScoreItem" and "Score" it isn't obvious to me which is which, but I agree that "GradeItem" doesn't solve this. Perhaps a better name would be "ScoreBlueprint" or "ScoreDistribution" (from the Dutch 'puntenverdeling', i.e. score distribution).
> do we need a tree to start with? just some `ScoreItem`s and an (automatically computed) total score might be sufficient for a first iteration
We don't really need the tree, but:

- we could avoid a separate `ScoreItem` for the automatically computed total, but this also means we cannot easily apply the same options to it (e.g. the visibility and type);
- we could add a `ScoreItem` for the total, but then we also need some notion of parent and child `ScoreItem`s, and then we basically have a tree.

(Of course, this tree structure is internal to Dodona, we don't have to expose it: the UI can look like some `ScoreItem`s per exercise, with a computed total.)
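As an illustration of that internal tree, here is a minimal sketch of a self-referential `ScoreItem` model in Rails. All names (`ScoreItem`, `evaluation_exercise`, `maximum`) are taken from this discussion or assumed; this is not existing Dodona code.

```ruby
# Sketch only: a self-referential ScoreItem forming a tree per exercise.
class ScoreItem < ApplicationRecord
  belongs_to :evaluation_exercise
  # A nil parent marks the root item of an exercise.
  belongs_to :parent, class_name: 'ScoreItem', optional: true
  has_many :children, class_name: 'ScoreItem', foreign_key: :parent_id

  # Leaves carry their own maximum; internal nodes derive theirs from
  # their children, which gives the automatically computed total.
  def total_maximum
    children.empty? ? maximum : children.sum(&:total_maximum)
  end
end
```

The UI can then still render this as a flat list of items per exercise with a computed total, as suggested above.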
> I guess the scoring system will initially be fixed after creating an evaluation? afterwards we might consider what the options are to make changes: what should be done with scores that are already assigned?
A "simple" solution would be to unmark all completed submissions as done, meaning they would need to be evaluated again, which is probably OK, since I assume if you change the scoring system, you'll need to update the scores
That approach might result in a lot of work being lost. We might start with a fixed scoring scheme and later relax some of these restrictions. There are obvious changes that don't really impact the scores already given:
Rather than "throwing away" scores given before changing the score system, we might keep the scores given, but undo the "completed" status of their review so they need to be reviewed again (with the old scores still in place). Here's some actions that might be considered:
@niknetniko these might be interesting to read through for inspiration (models and terminology):
Thoughts/proposal for a first version:
Terminology
The proposal is to make this a tree structure, with the root of the tree being the grade item for an exercise. It is then possible to construct a tree that reflects how the scores should be assigned: x points for function A, y points for function B, etc. This is up to the evaluator. (Every ScoreTemplate would be linked to an EvaluationExercise in Dodona.)

This is a list of items for each exercise.
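To make the "x points for function A, y points for function B" example concrete, such a tree for a 20-point exercise could be built like this (a sketch against the hypothetical `ScoreItem` model above; `exercise` stands for an assumed `EvaluationExercise` record):

```ruby
# Sketch only: an evaluator distributing 20 points over two functions.
root = ScoreItem.create!(evaluation_exercise: exercise, name: 'Total')
ScoreItem.create!(evaluation_exercise: exercise, parent: root,
                  name: 'Function A', maximum: 10)
ScoreItem.create!(evaluation_exercise: exercise, parent: root,
                  name: 'Function B', maximum: 10)
root.total_maximum # => 20
```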
Flow
The intended flow of grading would be like this:
Future
Things that are not in scope for this issue, but can be done later once the basic version is deemed OK
LTI integration
In LTI, there are some different concepts:
Some problems need investigating:
Penalties
Another useful addition would be the concept of penalties: annotations with an assigned score that is deducted from the total score for an exercise.
The main difference from the scenario where the evaluator fills in the scores is that here each student starts with the maximum available points, and the annotations are subtracted from that score (e.g. 10/10 for a function, but a -1 penalty because of "break/continue" gives a score of 9/10).
This will need some thought on how both modes can be integrated, and how the penalties act when the evaluator wishes to override the final score. One possibility is allowing a custom start score (e.g. the student starts with 9/10, from which penalties are deducted). This might feel arbitrary to students, though (e.g. why do I have 7/10 when I only have two penalties of -1 each?). Another option is to disallow mixing both modes: either the student starts with 0 and the evaluator fills in the score, or the student starts with the max score and the evaluator must use penalties to deduct points.
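A small sketch of how the two modes could be kept apart per exercise, with penalties stored as positive amounts to deduct; all names here are hypothetical:

```ruby
# Sketch only: compute an exercise score in one of two modes.
#   :fill_in -> the student starts at 0 and the evaluator enters the score
#   :penalty -> the student starts at the maximum and penalties are deducted
def exercise_score(mode, maximum:, entered: 0, penalties: [])
  case mode
  when :fill_in then entered
  when :penalty then [maximum - penalties.sum, 0].max
  end
end

exercise_score(:penalty, maximum: 10, penalties: [1]) # => 9, i.e. 9/10
```

Clamping at 0 is one possible answer to "what if penalties exceed the maximum"; whether negative scores should be allowed is part of the open question above.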