datacamp / shellwhat

https://shellwhat.readthedocs.io
GNU Affero General Public License v3.0

Research SCT problems with git and shell courses #38

Closed filipsch closed 5 years ago

filipsch commented 6 years ago

NOTE: The content dashboard does not work properly for courses with sub-exercises. This should be fixed soon.

Currently, the SCTs are regex-based. Figure out to what extent the problems people are having are related to the SCT system being too limited.

A big part of frustration could also be explained by the difference between what the solution tells students to do, and what the instructions suggest, as discussed in https://github.com/datacamp/learn-features/issues/14.

filipsch commented 6 years ago

@gvwilson @martijnsublime @ncarchedi I had a look at the SCTs that were written for both the Git and Shell courses, and at the code of shellwhat and shellwhat_ext, to understand what is possible. My findings are below.

SCTs are strictly code-based

Problem

Solution

shellwhat already features functions such as:

If the SCT is too strict, these functions should be used instead of the code-based tests (or combined through test_correct()). If the SCT is too loose, these functions should be used in addition to the code-based tests above.
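The combination can be sketched in plain Python. This is illustrative only: the helper names below are hypothetical stand-ins, not the real shellwhat API.

```python
import re

def output_matches(student_output, solution_output):
    # Loose, output-based check: did the student produce the right result?
    return student_output.strip() == solution_output.strip()

def code_matches(student_code, pattern):
    # Strict, code-based (regex) check on the submitted command.
    return re.search(pattern, student_code) is not None

def check_correct_sketch(student_code, student_output, solution_output, pattern):
    # Mirrors the test_correct() idea: accept any command that produces
    # the right output, and only use the regex check to diagnose failures.
    if output_matches(student_output, solution_output):
        return None  # passed
    if not code_matches(student_code, pattern):
        return "Your command does not look right."
    return "Your command looks right, but its output differs."
```

The point of the nesting is that a student who solves the exercise in an unanticipated but correct way is not rejected by the regex, while a student who fails still gets the more specific code-based diagnosis.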

UPDATE: @machow has already provided numerous examples of how to do this in chapter 2 of the test shell course (link in Teach Admin).

Responsible: course maintainer

If it helps, I can update test_expr_output() and add an extra argument to specify the output that the expression should have when run in the student's shell (overriding the output that is used as the 'target' to match against).

Responsible: Content Engineering

SCTs for working with files are uninformative

Problem

Solution 1: give more feedback

Solution 2: give better feedback

You can use the check_file() function from protowhat to zoom in on the contents of a file, after which you can use test_cmdline().

As an example, the SCT from the second bullet of the [how can I pass filenames to scripts exercise]() can be adjusted from:

```python
from shellwhat_ext import test_compare_file_to_file

Ex() >> test_compare_file_to_file('count-records.sh', '/solutions/count-records.sh')
```

to:

```python
from shellwhat_ext import test_compare_file_to_file, test_cmdline
# check_file() comes from protowhat

Ex().check_correct(
    test_compare_file_to_file('count-records.sh', '/solutions/count-records.sh'),
    check_file('count-records.sh', use_fs=True, use_solution=False).test_cmdline(...)
)
# Still to fill in `...`; not sure how to write it yet.
```

Here, the first check verifies that the file is correct. If it isn't, the SCT dives into the count-records.sh file to see what's going on and gives feedback accordingly. This trick also allows you to specify feedback for code inside files.
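A minimal sketch of that two-step idea in plain Python, assuming a hypothetical helper that reads the student's file from the filesystem and applies a regex to the command inside it (the file contents and pattern below are illustrative, not taken from the actual course):

```python
import os
import re
import tempfile

def check_file_cmdline_sketch(path, pattern, message):
    # Mimics check_file(...) followed by a command-line check: read the
    # student's file, then give targeted feedback about the code inside it.
    with open(path) as f:
        contents = f.read()
    if re.search(pattern, contents) is None:
        return message
    return None  # check passed

# Example: complain if the script does not pipe into `wc -l`.
with tempfile.TemporaryDirectory() as d:
    script = os.path.join(d, "count-records.sh")
    with open(script, "w") as f:
        f.write("grep 2017 $1\n")  # missing the `| wc -l` part
    feedback = check_file_cmdline_sketch(
        script, r"wc\s+-l", "Did you pipe the output into `wc -l`?")
```

The feedback message is now about the specific command inside the file, rather than a generic "the file is not correct".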

Responsible: course maintainer; Content Engineering if check_file() is not working as expected.

UPDATE: After discussing with @gvwilson this will require more work on the engineering side before it can be done, to handle multi-line shell files.

Interacting with files is tedious

See https://github.com/datacamp/learn-features/issues/14 for a discussion. In short, with the current shell interface, you are supposed to edit files interactively through nano. If people mess up, they get very little information about what they are doing wrong (see previous section). If people give up, they get to see a solution that is a 'one-liner fix' for the exercise, but that doesn't replicate the behavior the student is expected to show.

Solution

The solution has been described in rough lines in the learn-features issue. It's going to be tricky.

Responsible: LE together with content engineering. This is not something I can do on my own.

Final Notes

filipsch commented 6 years ago

After discussing with @martijnsublime, I will create issues for new functionality that was described above. Every time something is added, documentation will be updated and @gvwilson will be informed.

gvwilson commented 6 years ago

You're a good person.

filipsch commented 5 years ago

Two PRs have been made to both the Git and Shell courses that rewrite all of the SCTs according to what is possible in the new shellwhat package. I am confident that these changes will improve the quality of the feedback and the courses overall. When these are merged, I will close this issue.

filipsch commented 5 years ago

Shellwhat has been significantly improved and cleaned up, and all SCTs for the Intro to Shell and Git courses have been rewritten to be more robust and to produce better feedback messages. We are closely following up on the impact this is having.