Code coverage was added to the NCI JULES rose stem tests in https://code.metoffice.gov.uk/trac/jules/ticket/1266. Might be something useful to copy from there.
Unassigning myself from this for now
This should wait for #258 before looking into it.
To make the codecov-instrumented compilation write its profile files to a specific directory, an additional flag should be set: `prof-dir=<profile-generated-files>`. I was thinking of setting it to `runs/fluxsite/analysis/codecov`. But the thing is, the directory already has to exist during the build stage (https://www.intel.com/content/www/us/en/docs/fortran-compiler/developer-guide-reference/2023-2/prof-dir-qprof-dir.html). Now we have some options here (see the sketch after this list for how the flag might be passed):

1. Create `codecov` after runtime for all experiments.
2. Move `analysis` to the top level (along with `runs` and `src`) and create the `analysis` dir before building. This would change the bitwise-comparisons a bit as well, but not by much. This seems like the cleanest option to me, but do we want the information that the analysis was for `fluxsite`/`spatial`/etc. (a `share` folder perhaps)?
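For concreteness, a minimal sketch (in Python, with hypothetical paths and helper names) of how the flag could be wired into the build step; the `-prof-gen=srcpos` and `-prof-dir` flags are from the Intel docs linked above:

```python
import subprocess
from pathlib import Path

# Hypothetical location; whichever option we pick, this directory must
# already exist when the compiler runs (per the Intel docs linked above).
PROF_DIR = Path("runs/fluxsite/analysis/codecov")

def compile_with_coverage(sources: list[str]) -> None:
    PROF_DIR.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "ifort",
            "-prof-gen=srcpos",          # instrument for code coverage
            "-prof-dir", str(PROF_DIR),  # where the profile files land
            *sources,
            "-o", "cable",
        ],
        check=True,
    )
```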
> I was thinking to set it to runs/fluxsite/analysis/codecov. But the thing is the directory already has to exist during build stage

`runs/fluxsite/analysis/codecov` seems like a good spot for it. Just remember to create separate coverage directories for each model version in `realisations`. On the directory existing before build time: can we create the directory (parents included) before invoking the build?
I think creating the run directories before the build should work fine.
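A minimal sketch of that pre-build step, assuming hypothetical directory and helper names based on the layout discussed above (`pathlib.Path.mkdir` with `parents=True` creates any missing parents, and `exist_ok=True` makes the call idempotent across reruns):

```python
from pathlib import Path

def create_coverage_dirs(root: Path, realisation_names: list[str]) -> None:
    # One coverage directory per realisation, created before the build
    # so the compiler's prof-dir requirement is satisfied.
    for name in realisation_names:
        (root / "runs" / "fluxsite" / "analysis" / "codecov" / name).mkdir(
            parents=True, exist_ok=True
        )
```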
The only problem is having the `codecov/` directory underneath the `fluxsite` directory. What happens if we want the code coverage only for the spatial runs (`benchcab spatial`)? Should the serial compilation use `codecov/` under `fluxsite` while the MPI compilation uses `codecov/` under `spatial`?

Or should we bring `analysis` up directly under `runs`? But then why not move `outputs`?
I think `codecov/` shouldn't be under `fluxsite/`, because unlike `outputs/` we want code-coverage analysis grouped by each binary, irrespective of the type of run (`fluxsite`/`spatial`), for a given set of science configurations. We aggregate the results for all runs done by a single binary using `profmerge` in a single folder.
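As a sketch, that per-binary aggregation step could look something like this (directory layout hypothetical; the `profmerge` and `codecov` report-tool invocations follow Intel's documented usage, so worth double-checking against the toolchain version we build with):

```python
import subprocess
from pathlib import Path

def aggregate_coverage(binary_cov_dir: Path) -> None:
    # Merge all .dyn files from the runs of one binary into a single
    # pgopti.dpi inside that binary's coverage directory.
    subprocess.run(
        ["profmerge", "-prof_dir", str(binary_cov_dir)], check=True
    )
    # Generate the coverage report from the merged .dpi and the
    # static .spi file produced at build time.
    subprocess.run(
        [
            "codecov",
            "-prj", binary_cov_dir.name,
            "-dpi", str(binary_cov_dir / "pgopti.dpi"),
            "-spi", str(binary_cov_dir / "pgopti.spi"),
        ],
        check=True,
        cwd=binary_cov_dir,
    )
```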
I propose the following hierarchy:

```
.
├── runs
│   ├── analysis
│   │   ├── bitwise-comparisons
│   │   │   ├── spatial (not yet implemented; if not needed, just use fluxsite-bitwise-comparisons as the folder name)
│   │   │   └── fluxsite
│   │   └── codecov
│   │       ├── <realisation_no>_<serial/parallel>
│   │       │   ├── .dpi, .spi and .dyn files for each run, based on met forcing and science configurations
│   │       │   └── ...
│   │       └── ...
│   ├── fluxsite
│   │   ├── logs
│   │   ├── outputs
│   │   └── tasks
│   ├── spatial
│   └── payu-laboratory
```
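A small sketch of how the `<realisation_no>_<serial/parallel>` directory names in the tree above could be generated (purely illustrative; the function name is hypothetical):

```python
from itertools import product

def coverage_dir_names(n_realisations: int) -> list[str]:
    # One directory per (realisation, build mode) pair, e.g. "0_serial".
    return [
        f"{i}_{mode}"
        for i, mode in product(range(n_realisations), ("serial", "parallel"))
    ]
```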
Note (unrelated to the issue): I also wanted to ask why we have `payu-laboratory` as a separate directory to `spatial`? Is it because `payu` can also run `fluxsite` tasks and is independent of `spatial`? If yes, then I think for clarity, `runs` could be divided in a stage-wise manner (like `setup`/`output`/`analysis`) at the top level.
> I think `codecov/` shouldn't be under `fluxsite/`, because unlike `outputs/` we want code-coverage analysis grouped by each binary, irrespective of the type of run (`fluxsite`/`spatial`), for a given set of science configurations. We aggregate the results for all runs done by a single binary using `profmerge` in a single folder.

Yep, I think that's a good approach. To keep changes minimal, can we put the `codecov` directory in the root directory (i.e. same level as `runs`) and keep the existing structure in the `runs` directory?
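If it helps, a sketch of what that minimal change might look like (the coverage subdirectory names are illustrative, following the `<realisation_no>_<serial/parallel>` pattern above):

```
.
├── codecov
│   ├── 0_serial
│   └── 0_parallel
├── runs
│   ├── fluxsite
│   ├── spatial
│   └── payu-laboratory
└── src
```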
Code coverage analysis would allow us to see which parts of the CABLE code base are executed for a given set of science configurations. This would help users (and us) choose an optimal set of science configurations to use for testing.
Resources: