Broadly, the tests needed are:
1. tests on dictionary learning: check for convergence;
2. tests for parallel MRI: a test with 2 channels and sensitivity maps chosen so that the combined image is a step function (see the sketch below);
3. a test on reweighting.
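A minimal sketch of what point 2 could look like, assuming plain numpy and a sum-of-squares channel combination (the actual reconstructor API in pysap-mri may differ):

```python
import unittest

import numpy as np


class TestParallelMriSanity(unittest.TestCase):
    """Sanity check: with 2 channels and binary, non-overlapping
    (step-function) sensitivity maps, the sum-of-squares combination
    must reproduce the original image exactly."""

    def test_sos_with_step_sensitivity_maps(self):
        rng = np.random.default_rng(0)
        img = rng.random((64, 64))
        # Channel 0 sees the left half, channel 1 the right half.
        smaps = np.zeros((2, 64, 64))
        smaps[0, :, :32] = 1.0
        smaps[1, :, 32:] = 1.0
        channel_images = smaps * img[None]
        # Exactly one map is 1 at every pixel, so SoS == img.
        sos = np.sqrt(np.sum(np.abs(channel_images) ** 2, axis=0))
        np.testing.assert_allclose(sos, img, atol=1e-12)


if __name__ == "__main__":
    unittest.main()
```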
We have a test for all reconstructors in place now. We just need to test dictionary learning, but that is not feasible given how slow it is. @zaccharieramzi, can we close this? We are at nearly 78 percent coverage now.
I think there are some bits we can still increase, like mri/operators/linear/utils.py
or mri/operators/fourier/non_cartesian.py
(based on https://travis-ci.org/CEA-COSMIC/pysap-mri/jobs/643888955), but yes, I think we can release anyway.
Coverage of the linear utils can be increased by a test for dictionary learning (I am still concerned about the runtime, though).
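One way to keep such a test fast is to shrink the problem drastically. A minimal sketch using scikit-learn's MiniBatchDictionaryLearning (an assumption for illustration: mri/operators/linear/utils.py may wrap a different implementation):

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Tiny synthetic problem: 50 patches of 16 pixels, generated from
# 4 ground-truth atoms, so learning finishes in well under a second.
rng = np.random.default_rng(0)
true_atoms = rng.standard_normal((4, 16))
true_codes = rng.standard_normal((50, 4))
patches = true_codes @ true_atoms

dico = MiniBatchDictionaryLearning(
    n_components=4, transform_n_nonzero_coefs=4, random_state=0)
codes = dico.fit_transform(patches)
recon = codes @ dico.components_

# A loose sanity bound rather than a strict convergence criterion,
# so the test stays robust and fast on CI.
rel_err = np.linalg.norm(recon - patches) / np.linalg.norm(patches)
assert rel_err < 0.5, rel_err
```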
The non_cartesian coverage cannot increase, since the uncovered code is the GPU NUFFT, which cannot be tested on CI. However, we do have a local version to test it.
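For reference, a common pattern for such local-only tests is to skip them automatically when the GPU backend is not importable. A sketch (the module name `gpuNUFFT` is illustrative, not necessarily the real import):

```python
import unittest

# Assumption: the GPU backend is exposed as an importable module;
# "gpuNUFFT" here is a placeholder for the real package name.
try:
    import gpuNUFFT  # noqa: F401
    GPU_AVAILABLE = True
except ImportError:
    GPU_AVAILABLE = False


@unittest.skipIf(not GPU_AVAILABLE, "GPU NUFFT backend not installed")
class TestGpuNufftLocally(unittest.TestCase):
    """Runs only on machines with the GPU backend (e.g. locally) and
    is skipped on CI, which is why these lines never show up in the
    CI coverage report."""

    def test_placeholder(self):
        # Real forward/adjoint checks would go here.
        self.assertTrue(GPU_AVAILABLE)
```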
Both your comments make total sense; closing this, then.
Currently we have many new features coming in, each with an equivalent test. However, we are still only at around 60 percent coverage.
To ensure a stable release, it would make sense to have tests that at least check the sanity of most of the code. Issue #35 partly tackles this, but the examples currently don't add to coverage; making them count is still to be explored (see the sketch at the end of this comment).
Here is the current state of code coverage in descending order:
In my opinion, the highlighted ones need higher coverage.
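On the point about examples not adding to coverage: one possible approach is to execute the example scripts from a test, so the library code they exercise is measured like any other test. A sketch, assuming the examples are plain Python scripts under an `examples/` directory (a layout assumption, not a confirmed detail of this repo):

```python
import pathlib
import runpy
import unittest

# Assumption: examples are standalone scripts under "examples/".
EXAMPLE_SCRIPTS = sorted(pathlib.Path("examples").glob("*.py"))


class TestExamplesRun(unittest.TestCase):
    def test_each_example_executes(self):
        for script in EXAMPLE_SCRIPTS:
            # subTest keeps one failing example from hiding the others.
            with self.subTest(script=script.name):
                runpy.run_path(str(script), run_name="__main__")


if __name__ == "__main__":
    unittest.main()
```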