Open nwc10 opened 5 months ago
Sorry, I was not clear - I think that the "feature" `cover` would need is "treat this list of files on the CLI as having 0% coverage".

Scanning recursively, filtering and so on is the job of the script or human invoking it, as I don't think it's possible to create a sufficiently generalised filter-rule language for the command line. My example happens to have filtering because that's what I needed. I would assume I'd rewrite it so that my code does the filtering to get a list of files, and then shells out to `cover` with that list.
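A rough sketch of that division of labour (the directory layout and the filter rule here are invented for illustration, and `cover-touch.pl` taking a plain file list is an assumption about my own script, not a Devel::Cover feature):

```shell
# Demonstrate "filtering is the caller's job" on a throwaway tree.
tmp=$(mktemp -d)
mkdir -p "$tmp/lib/App" "$tmp/lib/Legacy"
touch "$tmp/lib/App/Thing.pm" "$tmp/lib/Legacy/Old.pm"

# Build the explicit file list with whatever rules we need
# (here: all .pm files, except anything under Legacy/)...
files=$(find "$tmp/lib" -name '*.pm' ! -path '*/Legacy/*')
echo "$files"

# ...then hand that list to the tooling, e.g.:
#   perl cover-touch.pl $files && cover
```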
(OK, possibly whichever globbing syntax it is that includes `**` to mean "any number of levels of directory" might just make it work self-contained - by implication that would be in combination with all the usual glob constructions, particularly comma-separated lists in `{}`. But I doubt that is easy to implement without dragging in more dependencies.)
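For what it's worth, bash's `globstar` option plus brace expansion gets close to that self-contained form (a sketch - the paths are invented, and it assumes bash is available):

```shell
# '**' with globstar matches any number of directory levels, and a
# {a,b} brace list expands before globbing, so one pattern can cover
# several top-level directories at arbitrary depth.
tmp=$(mktemp -d)
mkdir -p "$tmp/lib/App/Deep" "$tmp/src/Util"
touch "$tmp/lib/App/Deep/Mod.pm" "$tmp/src/Util/Helper.pm"

# Run the pattern in a bash with globstar enabled:
matches=$(cd "$tmp" && bash -O globstar -c 'printf "%s\n" {lib,src}/**/*.pm')
echo "$matches"
```

Which is exactly the sort of dependency-on-the-shell I'd rather not bake into `cover` itself.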
In the generated report, Devel::Cover completely ignores files which have no coverage (because no part of the test suite ever ran them). For our use case, this makes the report a bit deceptive and dangerous, because it gives the impression that we have reasonable coverage, when actually we have scary big holes where we shouldn't be refactoring (yet).
So I wondered if it's possible to force uncovered files to be treated as 0%. I asked about this on the london.pm list, but it seems no-one had attempted this. So here goes...
I hacked together a script for work - `cover-touch.pl`.
Without this the coverage report looks like this:
With this:
we see that the UNKNOWN UNKNOWNS are now improved to KNOWN UNKNOWNS.
(Output stolen from the talk http://act.yapc.eu/gpw2024/talk/7872 - video not online yet)
I have no idea if this is generating "correct" (let alone "optimal") cover_db runs, but it does seem to do the job. However, the report generation code then spews bazillions of `use of uninitialized value` warnings, because part of a data structure is only partially created. It is not expecting this - it appears that the "outer" level reference that now exists for that file causes code to be entered that wasn't previously, but this code then assumes a multi-level structure exists with values for statement, branch and condition coverage, and so is reading `undef` when it expects a number.

This patch "shuts it up", but I don't think that it's really correct, as we end up with the "average number" for the file being `0.0` (seen above), whereas I'd like it to be `n/a`.
(I believe there's at least one talk from work on YouTube that says we're on 5.32.0 ("this week"), so showing this isn't a leak.)
I'm not sure where to take this from here. I guess the open questions are:

* whether we want `cover` to create these placeholder runs itself
* whether the report line should be `n/a n/a n/a n/a n/a n/a n/a` (instead of `n/a n/a n/a n/a n/a n/a 0.0`)
* whether `n/a` counts as part of the denominator

I don't know how to submit a patch to do any of this. Or even if it's the right plan.
Right now (I think) if I have 5 files with totals of 0.6, 0.7, 0.8, 0.9 and 1.0, and 5 files completely uncovered, my overall "average" is `0.8`, which is highlighted as orange. With my hack, those 5 files will now show up, but the averaging ignores them. I'd rather like the averaging to treat the overall total as 0.4, which is very red.

It's actually a bit of a "bug" - right now, if I delete the test for a module with low coverage, my reported average coverage goes up. Game the system!
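The arithmetic for that example, sketched out (just the averaging, nothing Devel::Cover-specific):

```shell
# Five files with real coverage totals, five files never run at all.
covered="0.6 0.7 0.8 0.9 1.0"

# Averaging only the covered files (what the report does today):
avg_ignoring=$(echo $covered | awk '{for (i = 1; i <= NF; i++) s += $i; print s / NF}')
echo "ignoring uncovered files: $avg_ignoring"   # -> 0.8 (orange)

# Counting the five uncovered files as 0 in the denominator:
avg_counting=$(echo $covered 0 0 0 0 0 | awk '{for (i = 1; i <= NF; i++) s += $i; print s / NF}')
echo "counting them as zero:    $avg_counting"   # -> 0.4 (very red)
```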