Migrated legacy questions may carry non-standard point values for correct responses and partial-credit scores for incorrect responses, and torus uses these values in scoring. However, since custom score editing was added to torus authoring, these migrated questions have problems, because they do not include the flags the newer torus authoring code uses to detect custom scoring:
- They display incorrectly in authoring as using default scoring when in fact they carry custom scores, so authors cannot tell accurately what points are being assigned. This has been a problem for instructors in one class: new questions added to a quiz pick up the default 1-or-0 scoring and get mixed with migrated questions carrying invisible non-default point values, so not all questions have equal point values as desired.
- For multi-input questions, attempts to edit the scoring type (for example, changing custom to default) can break the question model, leading to a runtime error in authoring that makes the question uneditable. (This may be an independent bug that also affects newly torus-authored questions.)
This PR changes torus score authoring to treat migrated questions with “implicit custom scoring” as having custom scoring. The idea is to package the test for custom scoring into subroutines, hasCustomScoring (for single-part questions) and usesCustomScoring (for multi-input questions), and to use these everywhere in place of testing flags directly. Similarly, getOutOfPoints should be used to get the effective correct score value for both representations.
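The approach can be sketched roughly as follows. The function names hasCustomScoring, usesCustomScoring, and getOutOfPoints come from the PR; the model shapes (Part, MultiInputSchema) and default score constants are simplified assumptions, not the actual Torus types:

```typescript
// Simplified stand-ins for the Torus model types (assumptions, not actual).
interface Part {
  outOf?: number | null;          // explicit custom "correct" score
  incorrectScore?: number | null; // partial credit for a wrong response
}

interface MultiInputSchema {
  customScoring?: boolean;        // flag written by newer authoring code
  authoring: { parts: Part[] };
}

const DEFAULT_CORRECT_SCORE = 1;
const DEFAULT_INCORRECT_SCORE = 0;

// Single-part questions: custom scoring is signalled by a defined outOf
// value, or (for migrated questions with "implicit custom scoring") by a
// non-default incorrect score, even when no flag was ever written.
function hasCustomScoring(part: Part): boolean {
  return (
    (part.outOf !== undefined && part.outOf !== null) ||
    (part.incorrectScore !== undefined &&
      part.incorrectScore !== null &&
      part.incorrectScore !== DEFAULT_INCORRECT_SCORE)
  );
}

// Multi-input questions: honor the explicit activity-wide flag, but also
// treat migrated questions whose parts carry implicit custom scores as custom.
function usesCustomScoring(model: MultiInputSchema): boolean {
  return model.customScoring === true || model.authoring.parts.some(hasCustomScoring);
}

// Effective "correct" score for either representation.
function getOutOfPoints(part: Part): number {
  return part.outOf ?? DEFAULT_CORRECT_SCORE;
}
```

Centralizing the checks this way means a future change to the representation only touches these subroutines, not every authoring component that reads them.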
The handling of multi-input scoring-type changes was also corrected to reset all scores where needed.
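A hypothetical sketch of that fix, under the same simplified model assumptions as above (the function name resetToDefaultScoring and the model shapes are illustrative, not the actual Torus code):

```typescript
// Simplified stand-ins for the Torus model types (assumptions, not actual).
interface Part {
  outOf?: number;
  incorrectScore?: number;
}

interface MultiInputSchema {
  customScoring?: boolean;
  authoring: { parts: Part[] };
}

// Switching a multi-input question from custom back to default scoring must
// clear every part's custom score values; leaving stale values behind is the
// kind of inconsistent model state that made questions uneditable.
function resetToDefaultScoring(model: MultiInputSchema): MultiInputSchema {
  return {
    ...model,
    customScoring: false,
    authoring: {
      ...model.authoring,
      parts: model.authoring.parts.map((p) => ({
        ...p,
        outOf: undefined,
        incorrectScore: undefined,
      })),
    },
  };
}
```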
Note that in the torus implementation, multi-input questions use a different representation of custom scoring (an activity-wide customScoring flag) than single-part questions do (a defined outOf value in the part), so both types of question should be tested. For test cases, one can export and ingest the questions from the FCDS course and look at the activity bank questions. Searching by item type can find multi-input questions; some CATA questions reveal partial-credit responses.
Note 2: the torus custom score authoring interface does not enforce consistency: it allows a user to assign a wrong answer a point value greater than the designated correct answer's point value, which may lead to unexpected scoring behavior. This is an independent issue, however.