Attention: Patch coverage is 58.00000% with 42 lines in your changes missing coverage. Please review.
Project coverage is 70.21%. Comparing base (bb0ab3d) to head (1dcd270). Report is 5 commits behind head on main.
| Files | Patch % | Lines |
|---|---|---|
| src/benchcab/benchcab.py | 10.00% | 18 Missing :warning: |
| src/benchcab/model.py | 55.55% | 16 Missing :warning: |
| src/benchcab/coverage.py | 78.94% | 8 Missing :warning: |
I can't think of a test case to run the benchcab tests with code compiled with the Debug option.
Oh I see, so I thought we wouldn't have to rebuild benchcab again, but if the tests take too long we can leave `debug` as an option (we still need `debug-codecov`, right?). Although I was thinking we could have additional options in the future, like `release-profile`, which benchmarks the code in Release builds and needs different flags.
But setting `build_option` and the `coverage` command are independent of each other. `benchcab coverage` is run after the fluxsite jobs have completed (similar to `benchcab fluxsite-bitwise-cmp`), whereas `build_option` is set in `config.yaml` to pass in the flags before the build (which adds extra binary instrumentation), before the tests are run. So there is a dependency: `benchcab coverage` should only be run if `build_option` has `coverage`.
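To make that coupling concrete, here is a minimal sketch of the relevant fragment of `config.yaml` under this design (assuming `build_option` sits at the top level; everything else in a real configuration is omitted):

```yaml
# Sketch only: the key that ties the two steps together in the
# build_option-based design. It has to be set before the build so that the
# binaries carry the extra coverage instrumentation; `benchcab coverage`
# is only meaningful afterwards, once the fluxsite jobs have finished.
build_option: debug-codecov

# ... rest of the usual benchcab configuration ...
```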
Are we supposed to have run benchcab first (`benchcab run`) with the code coverage options on for the compilation, and then run `benchcab codecov`?
Yes, after the jobs are completed
Are you assuming we aren't going to check the code coverage manually?
I thought `benchcab codecov` could make things simpler by providing a central utility function to run commands across all realisations. Now that I think about it, we can have designs like:

1. `benchcab codecov` provides an error if `config.yaml` doesn't have `build_option` set to `debug-codecov`.
2. `benchcab codecov` is run after the PBS job if `config.yaml` has `debug-codecov`, unless the `skip` flag has it.

Both options above make the relationship explicit.
Happy to hear alternative designs/requirements.
Instead of enabling code coverage via `build_option`, have you considered introducing a global `codecov` option? This could either be a configuration file option (e.g. `codecov: true`) or a command line option for the relevant subcommands (e.g. `benchcab run --codecov`, where `--codecov` is propagated to other commands like `build`, `fluxsite-submit-job`, etc.).
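For illustration, the configuration-file form could be as small as this (a sketch only; where exactly the key lives would be up for discussion):

```yaml
# Sketch of the proposed global switch: a single flag that build,
# fluxsite-submit-job and the coverage step would all read, instead of
# overloading build_option.
codecov: true
```

The command line form mentioned above (`benchcab run --codecov`, with `--codecov` propagated to the other subcommands) would express the same intent without touching `config.yaml`.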
I think it would be better to have `codecov: true` at the top level, since if we run `benchcab gen_codecov` at the end, we can check if `config.yaml` had already set it.
I see. I've implemented the respective changes and tested them. I'll squash the commits once it's ready to be merged.
Resolves #91
Description
- `benchcab gen_codecov` for running code coverage analysis for all runs, grouped by each realisation's Intel build. Runs only if `codecov: true` in `config.yaml`. Added as a workflow step in fluxsite runs.
- `config.yaml` keyword (`codecov`), which provides the compiler build flags to run `gen_codecov` later on.

Note: as of now, `codecov` analysis is only supported w.r.t. `fluxsite` tasks.

Testing
On a `config.yaml` file with `codecov: true` set (a sketch of the shape is given below), run `benchcab`. After the job has completed, run the code coverage utility, `benchcab gen_codecov`.
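The exact configuration used for the test is not reproduced here; the sketch below shows only the switch under test, with a placeholder comment standing in for the rest of a normal benchcab configuration:

```yaml
# Illustrative config.yaml for the coverage test run. Only `codecov` is the
# setting being exercised; the remaining keys of a real configuration
# (realisation and fluxsite settings, etc.) are omitted here.
codecov: true

# ... rest of the usual benchcab configuration ...
```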
On inspection, it generates the files as expected at `runs/codecov/R0/`.