ls1intum / Artemis

Artemis - Interactive Learning with Automated Feedback
https://docs.artemis.cit.tum.de

Exam: Divide the Testing Process into Steps #6621

Open MarkusPaulsen opened 1 year ago

MarkusPaulsen commented 1 year ago

Is your feature request related to a problem?

Currently, the best practice is to replace the working test cases with a single dummy test (one that never fails) by issuing a commit during exam programming exercise creation, to prevent accidental execution of the tests due to misconfiguration. After testing, this commit must be reverted. The procedure causes unnecessary workload and can lead to problems if handled carelessly.

Describe the solution you'd like

One option would be to split the testing process into four steps:

  1. Compile the test code in the test repository. If the test code does not compile, deactivate all further steps and notify the instructors.
  2. Compile the student code in the student repository. If the student code does not compile, deactivate all further steps and display an error message to the student.
  3. Copy the student code from the student repository into the assignment/src folder of the test code in the test repository. This step is deactivated until the end of the exam.
  4. Execute the test code and display the result to the student. This step is deactivated until the end of the exam.
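The gating order of the four steps above can be sketched as follows (a Python sketch purely for illustration; all function and status names are hypothetical placeholders, not Artemis APIs, and in practice each step would be a separate CI job):

```python
from datetime import datetime, timezone

def run_exam_pipeline(exam_end: datetime,
                      compile_tests, compile_student,
                      copy_student_code, run_tests) -> str:
    """Hypothetical sketch of the proposed four-step exam test pipeline.

    Each callable stands in for one CI step and returns True on success.
    """
    # Step 1: compile the test code alone; abort and notify instructors on failure.
    if not compile_tests():
        return "notify-instructors"
    # Step 2: compile the student code alone; abort and inform the student on failure.
    if not compile_student():
        return "student-compile-error"
    # Steps 3 and 4 stay deactivated until the exam has ended.
    if datetime.now(timezone.utc) < exam_end:
        return "deferred-until-exam-end"
    copy_student_code()  # step 3: copy student sources into the test checkout
    # Step 4: execute the tests and report the result.
    return "passed" if run_tests() else "tests-failed"
```

The key property of this ordering is that steps 3 and 4 never run while the exam is still in progress, so the tests cannot be executed against student code by accident.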

Describe alternatives you've considered

Another option would be to automate the dummy-test commit and its subsequent revert, so that instructors do not have to perform them manually.
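Automating the commit-and-revert workflow could look roughly like this (a sketch driving plain git via subprocess; the function names and commit message are made up and not part of Artemis):

```python
import subprocess

def swap_in_dummy_test(repo: str, test_file: str, dummy_source: str) -> str:
    """Hypothetical helper: replace the real tests with a dummy test in one
    commit, and return the commit hash so it can be reverted automatically."""
    with open(f"{repo}/{test_file}", "w") as f:
        f.write(dummy_source)
    subprocess.run(["git", "-C", repo, "add", test_file], check=True)
    subprocess.run(["git", "-C", repo, "commit", "-m",
                    "Disable tests during exam"], check=True)
    result = subprocess.run(["git", "-C", repo, "rev-parse", "HEAD"],
                            check=True, capture_output=True, text=True)
    return result.stdout.strip()

def restore_tests(repo: str, commit: str) -> None:
    """Revert the dummy-test commit once the exam is over."""
    subprocess.run(["git", "-C", repo, "revert", "--no-edit", commit], check=True)
```

Because the revert references the recorded commit hash, the restore step cannot accidentally drop other changes made to the test repository in between.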

Additional context

One advantage of compiling the test code separately from the student code is that it enforces the use of reflection to access the student code (a general best practice).
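In Java this would mean the Reflection API; the same idea, illustrated here in Python with importlib (purely as a sketch, not Artemis code), is that the tests resolve the student's definitions at runtime rather than referencing them at compile/load time:

```python
import importlib

def call_student_function(module_name: str, func_name: str, *args):
    """Look up student code dynamically instead of importing it directly,
    so the test code itself still loads even if the student's code is
    missing or broken; a missing definition becomes an ordinary test failure."""
    try:
        module = importlib.import_module(module_name)
        func = getattr(module, func_name)
    except (ImportError, AttributeError) as err:
        raise AssertionError(f"required definition missing: {err}")
    return func(*args)
```

For example, `call_student_function("math", "sqrt", 9)` resolves and calls `math.sqrt` at runtime; the test module never names `sqrt` statically.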

just-max commented 1 year ago

For teaching FPV (the programming language is set to OCaml in Artemis), we solve this by checking whether the deadline has passed and, if not, statically disabling all but a dummy test. This is done by dynamically generating source code, so the check happens before student code ever gets a chance to run, and should be fairly safe. With this approach, the deadline needs to be set only once, in the test repository. Effectively, we perform the four steps you've described, embedded into our test framework. Note that for OCaml exercises the entry point is just a shell script, which makes this possible. It would perhaps be helpful to pass the exercise deadline (and maybe some additional parameters about the exercise?) into the environment of the test runner.
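The generate-the-entry-point idea can be sketched like this (a hypothetical illustration, not the actual FPV test framework; the two source strings are placeholders):

```python
from datetime import datetime, timezone

# Placeholder sources; in the real setup these would be generated test entry points.
REAL_TESTS = "let () = Test_runner.run_all ()"
DUMMY_TEST = 'let () = print_endline "Tests are hidden until the deadline."'

def generate_test_main(deadline: datetime) -> str:
    """Return the source of the test entry point to compile: the real suite
    after the deadline, a harmless dummy before it. Because the decision is
    made while generating source code, student code never runs before it."""
    if datetime.now(timezone.utc) >= deadline:
        return REAL_TESTS
    return DUMMY_TEST
```

The safety argument is that the branch is taken at source-generation time, before anything is compiled or executed, so no student code can influence it.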

As a side note: one of the current strengths of Artemis, currently, is that the test system is very generic: anything that can be run from a docker container and output JUnit XML files can be used as a test runner (just choose "Empty" as the programming language and configure the CI as desired). By enforcing any kind of system onto what happens inside the runner, you remove some of that flexibility. For example: OCaml (and other more "static" languages, probably) can't easily be compiled without the implementation present and reflection is out of the question. So maybe this would make more sense to add to the (Java?) programming language template.