Closed: redeboer closed this issue 9 months ago.
The tests are slow since some of them are not really 'unit' tests but rather run a full analysis. I think we should keep those extended tests.

Changing the way the coverity scan is triggered is fine for me, but the coverity scan FAQ says:

> It's probably overkill to run static analysis on each and every commit of your project. To increase availability of the free service to more projects, the addon is designed by default to run analysis on a per-branch basis. We recommend you create a branch named coverity_scan, which you can merge into whenever you would like to trigger analysis. See the FAQ for information about build submission frequency.
> The tests are slow since some of them are not really 'unit' tests but rather run a full analysis. I think we should keep those extended tests.
That is true; it's also fine to run those on Travis. Maybe it's an idea, though, to mark them and filter them out when you run tests locally (like a "non-unittest" filter), I guess comparable to `pytest.mark.slow`.
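As a minimal sketch of how that could look with plain pytest (the marker name `slow` and the example test are illustrative, not ComPWA's actual code):

```python
# conftest.py
def pytest_configure(config):
    # Register the custom marker so pytest does not warn about unknown marks.
    config.addinivalue_line("markers", "slow: marks tests that run a full analysis")
```

```python
# test_analysis.py (hypothetical module)
import pytest

@pytest.mark.slow
def test_full_dalitz_analysis():
    # Stands in for a test that runs a full analysis and takes seconds.
    ...
```

Locally you could then run `pytest -m "not slow"` to deselect the extended tests, while CI keeps running the full suite with a plain `pytest`.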
> Changing the way the coverity scan is triggered is fine for me, but the coverity scan FAQ says:
>
> > It's probably overkill to run static analysis on each and every commit of your project. To increase availability of the free service to more projects, the addon is designed by default to run analysis on a per-branch basis. We recommend you create a branch named coverity_scan, which you can merge into whenever you would like to trigger analysis. See the FAQ for information about build submission frequency.
I understand what you mean. Right now it is just too rare, because someone has to trigger the scan manually, and that happens very infrequently. Anyway, we won't run it on every commit but on every PR or merge to master, which is mostly a collection of commits. We can think about whether we want to run it on PRs: since they always change quite a bit until they are accepted, it might be useful to only run it on commits to master, although then you lose the feature of finding bad code before you merge it upstream.

Regarding the unit tests: ideally, we would not need that many "full analysis" tests if we had more unit tests. Running one full-analysis test that takes a few seconds would be OK for me, but I do have to admit that especially some of my Python tests take way too long. There are also a few tests here in ComPWA that we could optimize, though. I think we just have to keep slowly making our testing more unit-based, piece by piece.
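To illustrate the difference in granularity: a pure unit test exercises a single function in microseconds instead of running a whole analysis. A hedged sketch, assuming a hypothetical `invariant_mass_squared` helper (ComPWA's actual API differs):

```python
import math

def invariant_mass_squared(e, px, py, pz):
    # Hypothetical helper: m^2 = E^2 - |p|^2 for a single four-vector.
    return e**2 - (px**2 + py**2 + pz**2)

def test_mass_of_particle_at_rest():
    # For a particle at rest, m^2 equals E^2.
    assert math.isclose(invariant_mass_squared(0.938, 0.0, 0.0, 0.0), 0.938**2)

def test_mass_is_independent_of_momentum_direction():
    # The same |p| along different axes must give the same mass.
    assert math.isclose(
        invariant_mass_squared(1.0, 0.3, 0.0, 0.0),
        invariant_mass_squared(1.0, 0.0, 0.0, 0.3),
    )
```

Many such tests together still run in well under a second, which is what keeps local testing attractive.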
> Anyway, we won't run it on every commit but on every PR or merge to master, which is mostly a collection of commits.
I'm not sure about that: since #288 (which brings back codecov), a codecov report is generated for each new push to a PR.
> Ideally, we would not need that many "full analysis" tests if we had more unit tests.
For a physics framework we should have some full-analysis tests/benchmarks; it's just that they shouldn't have to be run by `ctest` for basic, local testing (and I would even say they shouldn't be taken into account in coverage). For that, we need good unit tests.
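For the Python side, a way to keep such tests out of the default local run is the opt-in pattern from the pytest documentation: skip everything marked `slow` unless an explicit flag is passed (this reuses the hypothetical `slow` marker from above; for the C++ tests, CTest labels with `ctest -LE slow` would serve the same purpose):

```python
# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False,
        help="also run the slow full-analysis tests",
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        return  # flag given: run everything, e.g. on CI
    skip_slow = pytest.mark.skip(reason="needs --runslow to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
```

With this, a plain `pytest` locally runs only the fast tests, while the CI invokes `pytest --runslow`; the coverage run could likewise leave out the full-analysis tests.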
> I think we just have to keep slowly making our testing more unit-based, piece by piece.
Yes, this is just something to keep in mind; in that regard, the estimated effort (2w-1m) is not a reliable measure. At least the codecov reports serve as a constant reminder.
I feel like we are miscommunicating again...
> I'm not sure about that: since #288 (which brings back codecov), a codecov report is generated for each new push to a PR.
We are talking about the coverity scan here, and as I stated just below that, we can discuss whether we want to enable the coverity scan for PRs (I'm aware of the mechanics). Also, if you push while a PR build is running, Travis CI should cancel the previous run.
> For a physics framework we should have some full-analysis tests/benchmarks; it's just that they shouldn't have to be run by `ctest` for basic, local testing (and I would even say they shouldn't be taken into account in coverage). For that, we need good unit tests.
OK, I'm a bit confused by that standpoint. I assume you would like to run a full-analysis test by hand once in a while (which is actually what we have right now: the Dalitz Analysis Example)? Generally, I oppose such practices, for obvious reasons. I think it is possible to deliver a robust framework with just unit tests and maybe a few inter-module tests. By the way, our current "analysis test" runs in roughly 5 seconds, which I find acceptable.
Our code coverage has dropped below 40% and the unit tests are slow.

Some ideas:

- Speed up the tests so that `ctest` would just take less than a minute; it's more attractive to continually test your additions to the code. [Estimated effort: 2w-1m]
[Edit] Things that need to be fixed asap:

- Change the `coverity_scan` branch trigger and make coverity run on master commits and PRs (`travis.yml` configuration) [Estimated effort: 2h - 1d]