@gvwilson @martijnsublime @ncarchedi
I had a look at the SCTs that were written for both the Git and Shell courses and at the code of `shellwhat` and `shellwhat_ext` to understand what is possible. I came to the following findings.
`test_student_typed()` is used in very simple cases:

```
ls
git diff
git status
```

Note that the entire Git course uses only `test_student_typed()` for coding exercises.
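For reference, a minimal sketch of what such an SCT looks like (the regex is illustrative, not copied from the course):

```python
# Accept `git status`, allowing for surrounding whitespace.
# test_student_typed() is provided by shellwhat.
Ex() >> test_student_typed(r'\s*git\s+status\s*')
```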
The `test_cmdline()` function, authored by Greg, is used for more advanced commands in the Shell course:

```
cut -d , -f 2 seasonal/autumn.csv | grep -v Tooth | head -n 1
head -n 1 $testing
```

This function takes on the responsibility of an advanced parser that is robust against multiple variations of calling a command. So in a way, we do use parsing.
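To illustrate why parsing beats regex matching, here is a small standalone sketch (plain Python, not `shellwhat` code): two equivalent invocations with their flags in a different order only compare equal after being parsed.

```python
import shlex

def parse_flags(command):
    """Split a command into program, option/value pairs, and positional args.
    Simplified: assumes every option takes exactly one value."""
    tokens = shlex.split(command)
    prog, rest = tokens[0], tokens[1:]
    opts, args, i = {}, [], 0
    while i < len(rest):
        if rest[i].startswith('-'):
            opts[rest[i]] = rest[i + 1]
            i += 2
        else:
            args.append(rest[i])
            i += 1
    return prog, opts, args

# Same options in a different order: equal once parsed,
# but a single regex would have to enumerate both orders.
assert parse_flags('cut -d , -f 2 seasonal/autumn.csv') == \
       parse_flags('cut -f 2 -d , seasonal/autumn.csv')
```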
Take, for example, an exercise where students should run `ls` in `/home/repl`, the default dir. If students do `ls`, it is accepted. The following submission is also accepted:

```
mkdir test
cd test
ls
```

In other words, the SCT accepts wrong answers.
The opposite also occurs: suppose we want students to list the contents of the `test` folder, but we don't care how. All of the below should be accepted:

```
ls test
ls /home/repl/test
ls ../repl/test
```

It is very hard to allow for all these answers with simple regexes.
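A standalone sketch of the kind of normalization an SCT would need instead: resolve the argument against the working directory and compare paths, rather than matching literal strings.

```python
import os.path

def points_at(argument, cwd, expected):
    """True if `argument`, resolved against `cwd`, denotes the `expected` path."""
    return os.path.normpath(os.path.join(cwd, argument)) == os.path.normpath(expected)

# All three submissions from above resolve to the same directory:
for arg in ['test', '/home/repl/test', '../repl/test']:
    assert points_at(arg, cwd='/home/repl', expected='/home/repl/test')
```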
`shellwhat` already features functions such as:

- `test_output_contains()`, which can string-match the output.
- `test_expr_output()`, which can run an expression in the shell, fetch its output, and check whether that output is found in the output the student generated.

If the SCT is too strict, these functions should be used instead of the code-based tests (or combined with them through `test_correct()`; see the sketch below). If the SCT is too loose, these functions should be used in addition to the code-based tests above.
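For instance (a sketch; the exact argument names and order of `test_correct()` are my assumption here): check the output first, and only fall back to the stricter code-based check for diagnosis if the output is wrong.

```python
# Sketch: if the output check passes, the submission is accepted;
# if it fails, the code-based check runs to produce a more specific message.
Ex() >> test_correct(
    test_output_contains('Tooth'),
    test_student_typed(r'grep\s+Tooth\s+seasonal/autumn\.csv')
)
```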
UPDATE: @machow has already provided numerous examples of how to do this in chapter 2 of the test Shell course (link to Teach Admin).
Responsible: course maintainer
If it helps, I can update `test_expr_output()` and add an extra argument to specify the output that the expression should have when run in the student's shell (overriding the student's output as the target to match against); a sketch follows below.

Responsible: Content Engineering
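The extended call could then look something like this (the `output=` argument is hypothetical; it only illustrates the proposal):

```python
# Hypothetical `output=` argument: compare the expression's result against
# an explicit expected value instead of the student's own output.
Ex() >> test_expr_output('head -n 1 seasonal/winter.csv',
                         output='Date,Tooth')
```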
For some exercises testing file contents, no manual feedback message is provided. For example, the second bullet of the "How can I pass filenames to scripts?" exercise just compares the two files, and produces pretty uninformative feedback if a line in the file is wrong:

```
Line(s) in count-records.sh not as expected: 1
```

This is not helpful to the student at all.
In addition, it's one or the other: either you give manual feedback, or you depend on automatically generated feedback. Manual feedback is custom and can take a guess at what people did wrong, but advanced automatically generated feedback can actually tailor the message to the error at hand.
Two things can improve this:

- Update the `shellwhat` functions so you can append the custom message to the automatically generated message, through an `append=True` arg (see the sketch below this list). That way, students get targeted feedback, plus an additional human-written hint at what could be wrong. (Responsible: Content Engineering)
- Use the `check_file()` function from `protowhat` to zoom in on the contents of a file, after which you can use `test_cmdline()`.
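The first suggestion could look like this (neither `extra_msg` nor `append` exists today; the names are made up to sketch the proposal):

```python
# Hypothetical: append a custom hint to the auto-generated message
# instead of replacing it.
Ex() >> test_compare_file_to_file(
    'count-records.sh', '/solutions/count-records.sh',
    extra_msg='Remember to loop over all of the filenames.',
    append=True
)
```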
As an example, the SCT from the second bullet of the [how can I pass filenames to scripts exercise]() can be adjusted from:

```python
from shellwhat_ext import test_compare_file_to_file

Ex() >> test_compare_file_to_file('count-records.sh', '/solutions/count-records.sh')
```

to:

```python
from shellwhat_ext import test_compare_file_to_file, test_cmdline

Ex().check_correct(
    test_compare_file_to_file('count-records.sh', '/solutions/count-records.sh'),
    check_file('count-records.sh', use_fs=True, use_solution=False).test_cmdline(...)
)
# Still to fill in `...`; not sure how to write it yet.
```
Here, the first check verifies that the file is correct. If it isn't, the SCT dives into the `count-records.sh` file to see what's going on and gives feedback accordingly. This trick also allows you to specify feedback for code inside files.

Responsible: course maintainer; content engineering if `check_file()` is not working as expected.
UPDATE: After discussing with @gvwilson, this will require more work on the engineering side before it can be done, in order to handle multi-line shell files.
See https://github.com/datacamp/learn-features/issues/14 for a discussion. In short, with the current shell interface you are supposed to edit files interactively through `nano`. If people mess up, they get very little information about what they are doing wrong (see the previous section). If people give up, they get to see a solution that is a one-liner fix for the exercise, but doesn't replicate the behavior the student should show.
This has been described in rough lines in the learn-features issue. It's going to be tricky.
Responsible: LE together with content engineering. This is not something I can do on my own.
To summarize:

- Together with the `shellwhat_ext` extension package, we can test what we want to test.
- `check_output_expr()` could be extended to expect arbitrary outputs, instead of only the output that the student generated.
- `check_file()` could be improved so it can be piped into `test_cmdline()`.

After discussing with @martijnsublime, I will create issues for the new functionality described above. Every time something is added, the documentation will be updated and @gvwilson will be informed.
You're a good person.
Two PRs have been made to both the Git and Shell courses that rewrite all of the SCTs according to what is possible in the new `shellwhat` package. I am confident that these changes will have an impact on the quality of the feedback and the courses overall. When they are merged, I will close this issue.
`shellwhat` has been significantly improved and cleaned up, and all SCTs for the Intro to Shell and Git courses have been rewritten to be both more robust and to produce better feedback messages. We are closely following up on the impact this is having.
NOTE: The content dashboard does not work properly for courses with sub-exercises. This should be fixed soon.
Currently, the SCTs are regex-based. Figure out to what extent the problems people are having are related to the SCT system being too limited. A big part of the frustration could also be explained by the difference between what the solution tells students to do and what the instructions suggest, as discussed in https://github.com/datacamp/learn-features/issues/14.