Wouldn't it be better to push to an external service like Coveralls (or similar)? This way we get nice GitHub integration that tells us things like "your PR decreases coverage by 2%". I believe some of these services even let you block merges (!!) if you decrease coverage :)
@lamby I tried this "spike" for a couple of reasons. First, I don't have the rights to add an integration like Coveralls to the repo :-) Second, if I understand it correctly, one would always have to push to GitHub first in order to receive a test report. Not only does this take much longer than running the report generation locally, but there are also tests that always time out on Travis, which means one can never fully rely on the report generated online. However, I understand your point.
@isms Any thoughts? :)
Apparently codecov.io supports Coverage.py. I'm currently trying to add an upload of the generated report to codecov.io on my fork. This way we can have both: a nice HTML version that can be generated locally (if need be) and a repository integration that shows us the change in coverage that merging a branch would introduce.
My thought is that a text report in the build is all we're looking for to close the issue. As @WGierke mentioned, an integration is easy and just a matter of configuring the build tool.
Anyway, fiddling with Travis is out of scope for contributors, so my recommendation is still just to generate the text report as part of the testing commands and leave it there. (We may or may not loop in a third-party service this late in the game.)
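For illustration, a minimal sketch of what such a testing command could look like. It assumes the suite runs under pytest and that branch coverage is wanted (to match the Branch/BrPart columns in the report further down); neither assumption is stated in this thread.

```bash
# Run the test suite under Coverage.py, collecting branch coverage as well.
coverage run --branch -m pytest

# Print the text report in the build log, including the missing line numbers.
coverage report -m
```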
@lamby @isms Now, we just generate the standard report. If someone wants to add uploading the report to codecov.io: it worked for me by running `coverage xml` and then uploading the result to codecov with `bash <(curl -s https://codecov.io/bash) -t THE-CODECOV-TOKEN`. Maybe it's not necessary to specify the token when uploading from a Travis build, but I'm not sure.
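For reference, a sketch of those two steps as they might run at the end of a Travis build; `CODECOV_TOKEN` is a placeholder for a repository environment variable, not something this PR sets up.

```bash
# Assumes the tests already ran under Coverage.py and left a .coverage data file.
coverage xml                                                  # convert the results to coverage.xml

# Upload the report with the codecov.io bash uploader.
bash <(curl -s https://codecov.io/bash) -t "$CODECOV_TOKEN"
```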
PS: the report from the current build:
prediction service:
```
Name                                                       Stmts   Miss Branch BrPart  Cover
---------------------------------------------------------------------------------------------
__init__.py                                                    0      0      0      0   100%
config.py                                                     22      0      0      0   100%
src/__init__.py                                                0      0      0      0   100%
src/algorithms/__init__.py                                     0      0      0      0   100%
src/algorithms/classify/__init__.py                            0      0      0      0   100%
src/algorithms/classify/src/__init__.py                        0      0      0      0   100%
src/algorithms/classify/src/gtr123_model.py                  151      1     26      1    99%
src/algorithms/classify/trained_model.py                       4      0      0      0   100%
src/algorithms/identify/__init__.py                            0      0      0      0   100%
src/algorithms/identify/prediction.py                        190    153     58      0    15%
src/algorithms/identify/src/__init__.py                        0      0      0      0   100%
src/algorithms/identify/src/gtr123_model.py                  284    229     68      1    16%
src/algorithms/identify/trained_model.py                      43     26     18      1    36%
src/algorithms/segment/__init__.py                             0      0      0      0   100%
src/algorithms/segment/src/__init__.py                         0      0      0      0   100%
src/algorithms/segment/src/evaluate.py                        31      0      0      0   100%
src/algorithms/segment/src/models/__init__.py                  0      0      0      0   100%
src/algorithms/segment/src/models/segmentation_model.py       38     10      0      0    74%
src/algorithms/segment/src/models/simple_3d_model.py          39      7      2      0    78%
src/algorithms/segment/trained_model.py                       28      0      6      0   100%
src/factory.py                                                19      4      4      2    74%
src/preprocess/__init__.py                                     0      0      0      0   100%
src/preprocess/crop_dicom.py                                  21      1      8      1    93%
src/preprocess/crop_patches.py                                33      4     16      5    82%
src/preprocess/errors.py                                       5      0      2      1    86%
src/preprocess/extract_lungs.py                              171     88     63      2    47%
src/preprocess/generators.py                                 313     59    172     25    78%
src/preprocess/improved_lung_segmentation.py                 284     17    100      9    92%
src/preprocess/load_ct.py                                     91      6     28      2    93%
src/preprocess/lung_segmentation.py                          139    105     36      1    20%
src/preprocess/preprocess_ct.py                               78     14     46     14    77%
src/views.py                                                  32      0     10      1    98%
---------------------------------------------------------------------------------------------
TOTAL                                                       2016    724    663     66    62%
```
interface service:
```
Name                            Stmts   Miss Branch BrPart  Cover
------------------------------------------------------------------
backend/__init__.py                 0      0      0      0   100%
backend/api/__init__.py             0      0      0      0   100%
backend/api/serializers.py         65     16      2      1    75%
backend/api/urls.py                28     10      4      0    56%
backend/api/views.py               79      7      3      0    91%
backend/cases/__init__.py           0      0      0      0   100%
backend/cases/enums.py             65      0     18      0   100%
backend/cases/factories.py         18      0      0      0   100%
backend/cases/models.py            84      0      4      0   100%
backend/images/__init__.py          0      0      0      0   100%
backend/images/factories.py        15      0      6      0   100%
backend/images/models.py           84      1     23      0    99%
config/__init__.py                  0      0      0      0   100%
config/settings/__init__.py         0      0      0      0   100%
config/settings/base.py            33      2      0      0    94%
config/settings/local.py           12      0      0      0   100%
config/urls.py                     10      4      4      1    50%
manage.py                          16      6      2      1    61%
------------------------------------------------------------------
TOTAL                             509     46     66      3    90%
```
Thanks!
As discussed in #275, it would be helpful to generate a coverage report for the unit tests in which the total coverage and the missing lines can be seen. This PR adds Coverage.py, which keeps track of which lines of code haven't been tested after running the tests. It generates an `htmlcov` directory containing the HTML files that should be served afterwards to correctly render the report. Examples of such reports for the current master are here: prediction, interface. To persist the report generated by Travis, the project manager needs to set up Travis artifacts.
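For anyone who wants to look at the HTML report locally, a rough sketch of the steps, assuming the tests have already been run under Coverage.py so a `.coverage` data file exists:

```bash
# Build the HTML report; it is written into the htmlcov/ directory.
coverage html

# Serve the directory so the report renders correctly in a browser.
cd htmlcov && python -m http.server
```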