Fenmore opened this issue 9 months ago
Thanks for the suggestion.
As a workaround for now, you could weight your easy tests with 0 points and use Assumptions to skip the harder tests if the easy ones are not successful. Students can then only earn points if they manage to pass the easy tests (which then act like preconditions).
If you use JUnit, a setup like this could work:
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Assumptions;
import org.junit.jupiter.api.MethodOrderer;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

// Use @TestMethodOrder so the @Order annotations on the test methods take effect
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
class MyTest {

    private static boolean passedEasyTest = false;

    @Test
    @Order(0)
    void easyTest() {
        Assertions.assertTrue(true); // Your test here
        passedEasyTest = true;
    }

    @Test
    @Order(1)
    void nonEasyTest() {
        // Skip (rather than fail) this test if the easy precondition test did not pass
        Assumptions.assumeTrue(passedEasyTest);
        // ...
    }
}
Thanks for the suggested workaround.
Unfortunately, we cannot make the point-rewarding test cases fail when students fail some of the easy ones. Students should be penalized independently, since their submission is not as good as one that also passes the easy tests, but we must not make them fail outright, because passing the easy tests is, after all, not a mandatory condition in that course.
Is your feature request related to a problem?
The current grading system doesn't support grading for programming exercises that contain test cases which are expected to pass anyway rather than being "worth" points in their own right.
Currently, points are accumulated: students are rewarded for passing a test case according to its effort or significance relative to all other point-rewarding test cases in the exercise. This makes it quite difficult to weight a test case that is not meant as a reward but rather as an expectation that should be fulfilled anyway; we will call such test cases "easy" from here on. Ideally, only test cases that target the main goal of the exercise should provide points, since that is what the challenge is about. This leaves the easy test cases with no weight assigned, making it irrelevant whether they are passed at all. We have seen exactly this happen in the past, hence the effort to raise our students' motivation to pass them too.
Example: There is an exercise about implementing advanced algorithms. Writing reasonably performant code is considered an easy task here, but not a mandatory condition. The test cases for the algorithms must therefore not take performance into account, so performance and algorithm test cases have to be separated, creating the mentioned split between test cases that reward points and those that are considered easy.
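For instance (assuming, as we understand the current system, that weights translate into points proportionally): if the three algorithm tests carry weight 1 each and the performance test carries weight 0, a submission that passes all algorithm tests but fails the performance test still receives full points, so there is no incentive to care about performance at all.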
Increasing the weight of an easy test case now has a few major drawbacks.
Describe the solution you'd like
Quite simple: the system used for SCA to deduct points per category should optionally be usable for individual test cases as well, making some test cases deduct an absolute number of points that does not depend on other test cases and where the deduction can exceed the exercise's maximum number of points.
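To make the idea concrete, here is a minimal sketch of how such a computation could look. Everything in it (TestCaseResult, computeScore, the concrete 2-point deduction) is hypothetical and only illustrates absolute, independent deductions alongside the existing weight-based rewards; it does not reflect any existing API.

import java.util.List;

/** Hypothetical illustration of the proposed grading model. */
class GradingSketch {

    /** A test case either rewards a weight-based share of the points or deducts a fixed amount. */
    record TestCaseResult(double weight, double absoluteDeduction, boolean passed) {}

    /**
     * Weighted points for passed rewarding tests, minus a fixed deduction for every failed
     * "easy" test. Deductions are independent of other test cases and may exceed maxPoints.
     */
    static double computeScore(List<TestCaseResult> results, double maxPoints) {
        double totalWeight = results.stream().mapToDouble(TestCaseResult::weight).sum();
        double earned = results.stream()
                .filter(TestCaseResult::passed)
                .mapToDouble(r -> totalWeight == 0 ? 0 : r.weight() / totalWeight * maxPoints)
                .sum();
        double deductions = results.stream()
                .filter(r -> !r.passed())
                .mapToDouble(TestCaseResult::absoluteDeduction)
                .sum();
        return Math.max(0, earned - deductions);
    }

    public static void main(String[] args) {
        // Three rewarding algorithm tests (weight 1, no deduction) and one easy performance
        // test (weight 0, fixed 2-point deduction when it fails).
        List<TestCaseResult> results = List.of(
                new TestCaseResult(1, 0, true),
                new TestCaseResult(1, 0, true),
                new TestCaseResult(1, 0, true),
                new TestCaseResult(0, 2, false));
        System.out.printf("%.1f%n", computeScore(results, 10)); // prints 8.0 (10 earned minus 2 deducted)
    }
}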
Describe alternatives you've considered
We haven't been able to come up with an alternative. Testing those "easy" cases is quite important for us, so simply leaving them out is not an option.
Additional context
No response