hjmjohnson opened 5 years ago
@afni-rickr, if we put the test data in https://github.com/NIFTI-Imaging/nifti-test-data.git we won't bloat this repo in the process. If it ends up being too much for github we can address that at a later date.
On Mon, Dec 17, 2018 at 5:49 PM Hans Johnson notifications@github.com wrote:
@leej3 https://github.com/leej3 mentioned that @afni-rickr https://github.com/afni-rickr had more test cases that might be easily added.
49% is pretty low for test coverage. https://my.cdash.org/viewCoverage.php?buildid=1581308
At a minimum we should review what is not covered to identify functions that are not needed, or verify that the uncovered code is of low importance.
It looks like this 'commands' test tree is still the same as what I have for testing nifti1_tool (the NIFTI-1-only version of nifti_tool). The NIFTI-2-compatible nifti_tool does not actually have many new tests; the newer ones exercise CIFTI input. Maybe it would be better to ponder some new tests that relate to code coverage.
@hjmjohnson Pondering coverage, do we have a current coverage result? I don't see anything at my.cdash.org, or maybe I just don't know how to find it.
Making a coverage build is not that hard; we just need to add the right flags. Let me have a quick look to see if the flags would be easy to include.
It appears that coverage testing is being done, but the my.cdash.org site is rejecting the results. I have sent an e-mail to webmaster@cdash.org to see what can be done about this.
Performing coverage
Processing coverage (each . represents one file):
..........
Accumulating results (each . represents one file):
..........
Covered LOC: 5800
Not covered LOC: 9400
Total LOC: 15200
Percentage Coverage: 38.16%
Add file: /home/travis/build/NIFTI-Imaging/nifti_clib/cmake/travis_dashboard.cmake
Add file: /home/travis/build/NIFTI-Imaging/nifti_clib/cmake/nifti_common.cmake
Submit files (using https)
Using HTTP submit method
Drop site:https://my.cdash.org/submit.php?project=nifti_clib
Submission failed: Maximum number of builds reached for nifti_clib. Contact webmaster@cdash.org for support.
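As a sanity check on the log above, the reported percentage is simply covered LOC divided by total LOC:

```shell
# 5800 covered lines out of 15200 total -> the 38.16% reported in the log
awk 'BEGIN { printf "%.2f%%\n", 5800 / 15200 * 100 }'   # prints 38.16%
```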
FYI: Fixed coverage reporting on my.cdash.org: https://my.cdash.org/index.php?project=nifti_clib#!#Coverage
I wonder if duplicating the nifti1 tests in nifti2 would drastically increase coverage.
I wonder how these numbers are being computed. Maybe the multiple-command script tests are not being properly evaluated. When I run only the 4 c21.[abcd]* scripts locally, I see 90% functional coverage (in both nifti2_io.c and nifti_tool.c) and 75% line coverage. Much of what is missed is error-handling code for conditions that are not encountered. znzlib.c shows 100% functional coverage and 95% line coverage. It would not be surprising if the scripts are not being evaluated correctly.
It would be ugly and irritating to break up those script tests, but maybe that is necessary for CDash to evaluate them. I could add a single-line test based on c21.d.misc.tests just to see the effect, e.g. nifti_tool -run_misc_tests -debug 2 -infile $infile2
FOUND IT! I'll work on the solution now.
UP TO 60% with the shell scripts turned on!
This is great, thanks!
@leej3 mentioned that @afni-rickr had more test cases that might be easily added.
38% is pretty low for test coverage. https://my.cdash.org/index.php?project=nifti_clib#!#Coverage
At a minimum, we should review what is not covered to identify functions that are not needed or verify that the uncovered code is of low importance.