I wonder how useful this would really be.
Usually, we notice that there's a lack of tests when we see a student's solution that manages to pass the tests but shouldn't. In most of those solutions, test coverage would be really high, because all the lines of the student's solution are executed most of the time. The problem with missing tests is often not with the code that is in the solution, but with the code that isn't, and with our tests not forcing that code to be there.
As anecdotal evidence, there's a PR I recently submitted to the Python track with an example of a wrong solution that passed all the tests and would have had 100% code coverage: https://github.com/exercism/python/pull/3153
Also, I feel that running code coverage just against the exemplar solution would say very little about missing tests. It's easy to write a good exemplar solution that has 100% coverage while wrong solutions and missing tests are still possible.
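To make this concrete, here is a minimal, made-up sketch (the exercise, package, and file names are invented and not taken from any real track exercise): the test file below never checks a negative input, so the wrong solution passes every test while every one of its lines is executed.

```go
// abs.go: a deliberately wrong "solution" to a hypothetical exercise
// that asks for the absolute value of an integer.
package abs

// Abs should return n without its sign, but this version just returns n.
func Abs(n int) int {
	return n
}
```

```go
// abs_test.go: an incomplete test suite that never uses a negative input.
package abs

import "testing"

func TestAbs(t *testing.T) {
	cases := []struct{ in, want int }{
		{0, 0},
		{5, 5},
	}
	for _, c := range cases {
		if got := Abs(c.in); got != c.want {
			t.Errorf("Abs(%d) = %d, want %d", c.in, got, c.want)
		}
	}
}
```

Every statement of the wrong solution is executed, so go test -cover would report 100% coverage even though the solution fails for any negative input; the real problem is the missing test case, which the coverage number cannot reveal.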
Writing additional tests is one of the easier things that a contributor can do. So, at the very least, test coverage would indicate which exercises they can contribute to.
> Usually, we notice that there's a lack of tests when we see a student's solution that manages to pass the tests but shouldn't. In most of those solutions, test coverage would be really high, because all the lines of the student's solution are executed most of the time.
I'm not sure I understand completely: when there's a lack of tests, a solution is more likely to pass than not, and test coverage is guaranteed to be low.
Code coverage is an important metric for testing. In most cases, higher coverage will ensure that a wrong solution doesn't pass. Of course, there could be exceptions in a few cases.
> Also, I feel that running code coverage just against the exemplar solution would say very little about missing tests.
The editor already has function signatures that the student has to complete. These function signatures are based on the exemplar solution, so code coverage would say a lot about missing tests.
> It's easy to write a good exemplar solution that has 100% coverage while wrong solutions and missing tests are still possible.
Is there something in Go you can point me towards for this, or an issue raised in Go similar to the Python PR? (The Black Jack exercises are different in Go and Python.)
I also don't think code coverage makes sense for this repository. The repo only contains one "random" example solution for each exercise, and running coverage for that would not help us with what the track is about.

Imagine an example solution contained some small extra sanity check for some input that is nice to have but not required by the exercise description. In the sense of the track, it is a perfectly valid example solution file because it passes the existing tests. Let's say the coverage report would show 90%. That would NOT mean we are missing a test case, because what should be covered by tests is determined by the exercise description in the case of Exercism, NOT by the random example solution we have in the repository.

Also be aware that exercise descriptions are oftentimes shared Exercism-wide (originating from a repo called problem-specifications); a track should not diverge from those without good reason, as it means more manual work when maintaining the exercise in the future.
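As a minimal, invented illustration of that scenario (the package, function, and exercise here are made up and not taken from the track):

```go
// A hypothetical .meta/exemplar.go for a made-up concept exercise.
package greeting

// HelloTo returns a greeting for the given name.
func HelloTo(name string) string {
	// Extra sanity check that is nice to have but not required by the
	// exercise description: the existing tests never pass an empty name,
	// so this branch shows up as uncovered and pulls the reported
	// coverage below 100%, even though no test case is actually missing.
	if name == "" {
		name = "there"
	}
	return "Hello, " + name + "!"
}
```

A coverage report would flag the guard as untested, but going by the exercise description there is nothing to add.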
For the reasons outlined above, I will close this issue.
Codecov is a code coverage solution which provides visibility into the parts of code which are not covered by unit tests.

Visibility of an implementation (located in exercises/concept/<CONCEPT-NAME>/.meta/exemplar.go) against its unit tests (located in exercises/concept/<CONCEPT-NAME>/concept_name_test.go) can indicate whether or not enough tests have been written. This would also give contributors and maintainers a clear indication about exercises which need more tests.

Example code coverage (and its corresponding repository)
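Purely to illustrate the mechanics, here is a minimal local sketch, independent of Codecov: it assumes a coverage profile has already been produced, for example by copying .meta/exemplar.go over the exercise stub and running go test -coverprofile=coverage.out in the exercise directory (the copy step and all file names here are assumptions, not part of the proposal). The program uses the standard golang.org/x/tools/cover package, which has to be added as a dependency, to report how many statements of each file the tests actually executed.

```go
// covercheck.go: a hypothetical helper, not part of the track tooling.
package main

import (
	"fmt"
	"log"

	"golang.org/x/tools/cover"
)

func main() {
	// Parse the profile written by `go test -coverprofile=coverage.out`.
	profiles, err := cover.ParseProfiles("coverage.out")
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range profiles {
		var total, covered int
		for _, b := range p.Blocks {
			total += b.NumStmt
			if b.Count > 0 {
				covered += b.NumStmt
			}
		}
		if total == 0 {
			continue
		}
		fmt.Printf("%s: %.1f%% of statements covered\n",
			p.FileName, 100*float64(covered)/float64(total))
	}
}
```

Running it with go run in the directory containing coverage.out prints one line per covered file.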