Closed by mdpiper 3 years ago
@mdpiper I've forgotten exactly how we test our coverage. It looks like it's part of our Ubuntu and Python 3.9 Test action, yet it also shows up as two separate tests (one of which is failing).
@mcflugen The failure is only a nominal sign error (-0.0%). I've seen this before with Coveralls, and the only change from the previous successful commit was an update to the changelog.
@mdpiper I guess my real question is, why do those two coveralls checks show up as separate entries at all? Should they not be part of the Test / build-and-test (ubuntu-latest, 3.9) check? Are we running extra coveralls checks that we don't need to?
@mcflugen I'm not going to guess at what Coveralls or the coveralls-python Action are doing, but AFAIK we're following the directions.
@mcflugen On Coveralls, there's one job listed under a build (these are the two links we see in the test results). My hunch is that since coveralls-python is built for parallel testing, the build is a container for holding multiple testing jobs in a commit, while the one job is our test, run in serial. This may also explain the label "false", since

```yaml
with:
  parallel: false
```

is assumed by coveralls-python because we didn't set it in the action. It seems likely that Coveralls is only called once.
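For reference, making that default explicit might look something like the step below. This is a sketch only: the action reference, version tag, and step name are assumptions based on the coveralls-python Action discussed above, not a copy of our workflow.

```yaml
# Hypothetical workflow step; the action reference and version are assumptions.
- name: Report coverage to Coveralls
  uses: AndreMiras/coveralls-python-action@develop
  with:
    parallel: false  # the default assumed when unset, per the discussion above
```

With `parallel: false`, a single coverage report is uploaded and the build is closed immediately, which matches the observed behavior of one job per build.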
@mdpiper sounds good. 👍
This PR attempts to fix the coverage score, reported as successful but 0%, from Coveralls through the coveralls-python Action.