Closed: t-brandt closed this pull request 3 months ago
Attention: Patch coverage is 8.00000%, with 46 lines in your changes missing coverage. Please review.
Project coverage is 60.16%. Comparing base (2f4410f) to head (ebd929c).
Files with missing lines | Patch % | Lines
---|---|---
jwst/extract_1d/apply_apcorr.py | 11.53% | 23 Missing :warning:
jwst/extract_1d/extract.py | 4.16% | 23 Missing :warning:
:umbrella: View full report in Codecov by Sentry.
I refactored the code slightly so that apcorr.apply() takes use_tabulated as an (optional) argument. Please tell me if you think this approach is better. If so, we can double-check the test results.
I think we should be able to write a unit test to verify that calling tabulate_correction followed by .apply(use_tabulated=True) gives the same results as .apply(use_tabulated=False). It would be best to do this on a data set with a nontrivial aperture correction.
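To make the proposed equivalence test concrete, here is a minimal sketch. The `ToyApCorr` class is a stand-in invented for illustration, not the jwst apcorr class; only the `tabulate_correction`/`apply(use_tabulated=...)` call pattern is taken from this thread. The idea is to use a wavelength-dependent (nontrivial) correction and assert that the precomputed and on-the-fly paths agree.

```python
import numpy as np
from scipy.interpolate import interp1d


class ToyApCorr:
    """Toy stand-in for the aperture-correction object (hypothetical, for illustration)."""

    def __init__(self, ref_wavelengths, ref_factors):
        # The expensive step in the real code is building interpolants like this.
        self._interp = interp1d(ref_wavelengths, ref_factors)
        self._tabulated = None

    def tabulate_correction(self, wavelength):
        # Precompute the correction on a wavelength grid so it can be reused.
        self._tabulated = self._interp(wavelength)

    def apply(self, wavelength, flux, use_tabulated=False):
        if use_tabulated:
            return flux * self._tabulated
        return flux * self._interp(wavelength)


# Nontrivial correction: factors vary with wavelength, so the test is meaningful.
ref_wl = np.linspace(1.0, 5.0, 50)
ref_factors = 1.0 + 0.1 * np.sin(ref_wl)

apcorr = ToyApCorr(ref_wl, ref_factors)
wl = np.linspace(1.5, 4.5, 200)
flux = np.random.default_rng(0).uniform(0.5, 2.0, wl.size)

direct = apcorr.apply(wl, flux, use_tabulated=False)
apcorr.tabulate_correction(wl)
tabulated = apcorr.apply(wl, flux, use_tabulated=True)
assert np.allclose(direct, tabulated)
```

The same assertion, written against the real apcorr object and a reference file with a non-unity correction, would be the unit test discussed above.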
These changes look sensible to me - thanks!
I'll be out next week, but can check back in after if there is still testing work that needs to be done.
Adding a unit test to verify that the tabulated and non-tabulated corrections are the same is harder than I thought. This will require a new dummy data set (since none of the ones there currently have the keys in the data tables). I was hoping it would just be a couple of lines. I could generate a data set and add it to the repository, and then write a test to verify that the answer is the same in the various ways of computing an aperture correction. Should I do this?
I'm not very familiar with the current apcorr tests. Is it possible to just add the keys you need to the existing mock data sets?
The current tests simply check that the interpolating aperture correction function is unity at a random wavelength (i.e., that there is no correction). To my mind, a proper test for this change is that the aperture correction applied to a grid of wavelengths is the same whether it is applied the old way or the new way. The function that applies the correction hard-codes a variety of fields that it expects in an input FITS table, and no such table is actually constructed or referenced in the current checks. So I think this would be a bit of a pain to implement and would require a new dummy reference file. If it is important to do this, I can give it a go.
It may be simpler, then, to cover this in the regression test framework (which should exercise it automatically) rather than as a unit test.
I see, the current unit tests don't cover any of this functionality yet. I agree it would take some development time to work up appropriate mock test data. I think it would be nice to have, but it's not critical for this change.
I think all comments have been addressed for this PR. Regression tests are started here: https://plwishmaster.stsci.edu:8081/job/RT/job/JWST-Developers-Pull-Requests/1617/
Partially resolves JP-3657
Partially addresses #8559
This PR enables the reuse of aperture correction objects for spectroscopic data with multiple integrations, specifically targeting BOTS data. Most of the time in a BOTS reduction to x1d files goes into constructing multidimensional interpolating splines for every wavelength and every integration. Reusing these across integrations saves a factor of ~10 in BOTS runtime. A further speedup of perhaps 2-3x, larger if a background is fit, will come from a rewrite of extract1d (hence this PR only partially addresses the JIRA ticket).
Checklist for PR authors (skip items if you don't have permissions or they are not applicable)
- [ ] updated CHANGES.rst within the relevant release section