lucaferranti opened 2 years ago
Does the Exercism platform automatically display the output to the student? The output of `mason test --show` is already fairly informative, so settling for level 1 and giving students the mason output could be a good start.
I think my understanding of the submission workflow is insufficient. Is the following correct? Locally, students just run `mason test`; what I mean is that no JSON is generated locally, it would only be produced by the test runner after submission.

@kytrinyx, do you have comments or suggestions on my messages above? Any pointer on how to get started is very welcome :)
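(For context, the JSON in question is the `results.json` file the test runner produces on the server. Per Exercism's test runner interface, a version 2 file has roughly this shape; the test names and message here are made up:)

```json
{
  "version": 2,
  "status": "fail",
  "tests": [
    { "name": "first domino matches", "status": "pass" },
    { "name": "chain closes the loop", "status": "fail", "message": "expected true, got false" }
  ]
}
```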
> LEVEL 3: let's think about this in the future, sometimes you gotta choose your battles

Agreed.
> when they submit the exercise, is the whole folder or only the stub file sent?

If I recall correctly, all the files that are listed in the `config.json` as being test and solution files get sent:
https://github.com/exercism/chapel/blob/414a3fc4c867bcef5cdfcd749490bfdc83ad8ecc/exercises/practice/dominoes/.meta/config.json#L6-L12
I think the rest of your assumptions there are correct, but I will defer to @ErikSchierboom, who is our resident expert. Erik, would you take a look at the above and chime in with any suggestions?
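(For concreteness, the linked `files` section of a practice exercise's `.meta/config.json` groups files by role, along these lines; the file names below are illustrative rather than copied from the dominoes config:)

```json
{
  "files": {
    "solution": ["dominoes.chpl"],
    "test": ["dominoes_test.chpl"],
    "example": [".meta/example.chpl"]
  }
}
```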
> LEVEL 3: let's think about this in the future, sometimes you gotta choose your battles

Implementing level 3 only makes sense if you have implemented concept exercises.
> If I recall correctly, all the files that are listed in the `config.json` as being test and solution files get sent: https://github.com/exercism/chapel/blob/414a3fc4c867bcef5cdfcd749490bfdc83ad8ecc/exercises/practice/dominoes/.meta/config.json#L6-L12

Basically, everything that is in the exercise directory as found in the track repo is sent, but with the submitted files overwriting any existing files. The test runner thus has access to all files in the exercise directory plus any submitted files. Does that make sense?
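(To picture the overwrite behaviour described above, here is a minimal sketch of how one could reproduce the test runner's view of an exercise locally; both paths are hypothetical:)

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical paths: the exercise as it lives in the track repo,
# and the directory containing a student's submitted files.
exercise_dir="exercises/practice/dominoes"
submission_dir="my-submission"

# Start from the full exercise directory as found in the track repo...
workdir="$(mktemp -d)"
cp -R "$exercise_dir/." "$workdir/"

# ...then copy the submitted files on top, overwriting any duplicates.
cp -R "$submission_dir/." "$workdir/"

# The test runner then operates on the merged directory.
(cd "$workdir" && mason test)
```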
This is the next (and last) big thing to do before getting the track released. Time to get serious with it.
In this issue, I summarize my understanding of the guidelines, to make sure I know what I have to do, and try to sketch a concrete action plan.
Overall, I think this should be easier than it looks, because we should have almost all the pieces ready: `mason test` already runs the test suite and signals failure through its exit code, and `mason test --show` already prints fairly informative per-test output.

I understand there are three levels of ambition:

- LEVEL 1: report only whether the solution as a whole passes or fails
- LEVEL 2: report the result of each individual test
- LEVEL 3: report additional per-test information; this only becomes relevant once we have concept exercises
So the main challenge is to implement the logic to generate the JSON.
My rough roadmap:
- LEVEL 1: run `mason test` and use the resulting exit code to choose what to write in the JSON. Could be done as a small bash script (see the first sketch below).
- LEVEL 2: capture the output of `mason test --show` and parse it to get the result of each individual test. This can be done with a quick and dirty script (second sketch below). Alternatively, add a method to the `Test` class that generates the JSON while running the tests; this requires some work, but probably not too much.

Comments? Suggestions?
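(To make the LEVEL 1 idea concrete, here is a minimal bash sketch: run the tests, branch on the exit code, and write a version 1 `results.json`. It assumes `jq` is available in the runner image, and putting the captured output into `message` so students see the mason output follows the suggestion above, not a spec requirement:)

```bash
#!/usr/bin/env bash
set -uo pipefail  # no -e: we want to inspect mason's exit code ourselves

# LEVEL 1 sketch: overall pass/fail only, based on the exit code.
output="$(mason test --show 2>&1)"
exit_code=$?

if [ "$exit_code" -eq 0 ]; then
  status="pass"
else
  status="fail"
fi

# jq builds the JSON and handles escaping of the captured output.
jq -n --arg status "$status" --arg message "$output" \
  '{version: 1, status: $status, message: $message}' > results.json
```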
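(And a LEVEL 2 sketch in the same spirit. I do not know the exact `mason test --show` output format, so the `OK <name>` / `FAILED <name>` line patterns below are placeholders to adapt; error handling, e.g. a compile failure producing `"status": "error"`, is omitted for brevity:)

```bash
#!/usr/bin/env bash
set -uo pipefail

output="$(mason test --show 2>&1)"

# Build the per-test array by pattern-matching the runner's output.
# The matched line formats are an assumption; adjust to mason's real output.
tests="[]"
while IFS= read -r line; do
  case "$line" in
    OK\ *)     name="${line#OK }";     status="pass" ;;
    FAILED\ *) name="${line#FAILED }"; status="fail" ;;
    *) continue ;;
  esac
  tests="$(jq --arg name "$name" --arg status "$status" \
    '. + [{name: $name, status: $status}]' <<<"$tests")"
done <<<"$output"

# Overall status: fail if any individual test failed.
overall="$(jq -r 'if any(.[]; .status == "fail") then "fail" else "pass" end' <<<"$tests")"

jq -n --arg status "$overall" --argjson tests "$tests" \
  '{version: 2, status: $status, tests: $tests}' > results.json
```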