Most methods should be covered by automated tests that verify they behave as expected across a range of inputs. I quantified test coverage with the coverage.py tool. A snapshot of the coverage today shows these results:
Module                           statements  missing  excluded  coverage
------------------------------------------------------------------------
src/muler/__init__.py                     0        0         0      100%
src/muler/echelle.py                    247       78         0       68%
src/muler/hpf.py                        157       45         0       71%
src/muler/igrins.py                     103       22         0       79%
src/muler/nirspec.py                     91        9         0       90%
src/muler/templates/__init__.py           0        0         0      100%
src/muler/utilities.py                    7        0         0      100%
tests/test_hpf.py                       105        0         0      100%
tests/test_igrins.py                    102       14         0       86%
tests/test_nirspec.py                    79        0         0      100%
tests/test_utilities.py                  40        0         0      100%
------------------------------------------------------------------------
Total                                   931      168         0       82%
We see that the HPF, echelle, and IGRINS modules have coverage in the 68-79% range. We would like to raise their coverage into the 80-90% range by adding tests in the places that currently lack them.
There are ways to automate this coverage tracking. For now, I recommend we simply check coverage locally on demand, and aim to add a test for most new methods.
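As a sketch of the kind of test we want more of, here is a minimal pytest-style example. Note that `normalize` is a hypothetical utility standing in for the small functions in muler, not an actual function from the package:

```python
import numpy as np
import pytest


def normalize(flux):
    """Hypothetical utility: divide a flux array by its median.

    Stands in for the kind of small helper found in src/muler/utilities.py.
    """
    flux = np.asarray(flux, dtype=float)
    return flux / np.median(flux)


# Parametrizing over several scales exercises the method
# "under a range of inputs", as described above.
@pytest.mark.parametrize("scale", [0.01, 1.0, 1e4])
def test_normalize_median_is_unity(scale):
    flux = scale * np.array([1.0, 2.0, 3.0, 4.0])
    assert np.median(normalize(flux)) == pytest.approx(1.0)
```

Once a test like this exists, the coverage report above will reflect the newly exercised lines the next time it is run.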