Closed moustakas closed 2 years ago
Updates in this PR (still WIP; unit tests not yet fully fixed):
A long-standing issue with the current template-generating code in `desisim.templates` is that if a simulated template did not satisfy the target-specific color cuts, the code would issue a non-catastrophic warning and return a spectrum of zeros. I think that's a defensible choice, because it would be misleading to return a mix of spectra which do and do not pass the color cuts; however, this choice has also caused myriad downstream problems (including #507 and #549). So, to address this issue, here I've done two things:
I've updated the parent `GALAXY` class (used by the ELG, LRG, and BGS template-makers) to help ensure that a template (one which satisfies the color cuts) will always be generated. We can revisit this issue if the changes above do not resolve the underlying problem, but I've generated hundreds of templates of each flavor and haven't seen a single failure yet. Note that the iterative piece does add a bit of extra computing time, but I think it's worth it.
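The iterative retry described above can be sketched as follows. This is a toy illustration, not the actual desisim code: `passes_color_cuts`, `make_template`, and the Gaussian "colors" are all hypothetical stand-ins for the real target-selection cuts and template parameters.

```python
import numpy as np

def passes_color_cuts(gr, rz):
    """Toy stand-in for the real target-specific color cuts."""
    return (gr > 0.2) and (rz < 1.6)

def make_template(rng, maxiter=50):
    """Redraw template parameters until the color cuts are satisfied,
    instead of returning a spectrum of zeros on the first failure."""
    for _ in range(maxiter):
        # Hypothetical "colors"; the real code draws full template parameters.
        gr = rng.normal(0.5, 0.3)
        rz = rng.normal(1.0, 0.5)
        if passes_color_cuts(gr, rz):
            return gr, rz
    raise RuntimeError('no template satisfied the color cuts after %d tries' % maxiter)

rng = np.random.default_rng(seed=1)
gr, rz = make_template(rng)
```

The `maxiter` guard keeps a pathological parameter draw from looping forever while still making a zero-spectrum return essentially impossible in practice.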
The second long-standing issue I decided to fix is how we handle velocity-dispersion broadening, which made bitwise reproducibility hard to ensure. In this branch I've switched from `desisim.pixelsplines` to `scipy.ndimage.gaussian_filter1d`. The upside is that the code is cleaner, faster, and reproducible (to be demonstrated with some forthcoming unit tests); the downside is that, since I don't do any constant-velocity resampling of the basis templates, the velocity broadening is only approximately correct across the optical range. I don't think this shortcoming is a blocking factor, however.
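A rough sketch of what this kind of fixed-kernel broadening looks like is below. The function name and the pivot wavelength are illustrative, not the actual desisim implementation; the point is that the kernel width in pixels is computed at a single wavelength, which is why the broadening is only approximate away from that pivot when the template is not on a constant-velocity (log-wavelength) grid.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

C_KMS = 299792.458  # speed of light [km/s]

def vdisp_blur(wave, flux, vdisp, pivot=5500.0):
    """Approximate velocity-dispersion broadening with a single Gaussian
    kernel whose width (in pixels) is evaluated at `pivot` [Angstrom].
    `vdisp` is the velocity dispersion in km/s."""
    dwave = np.gradient(wave)                  # per-pixel wavelength step
    sigma_ang = pivot * vdisp / C_KMS          # kernel width in Angstroms at the pivot
    sigma_pix = sigma_ang / np.interp(pivot, wave, dwave)
    return gaussian_filter1d(flux, sigma=sigma_pix, mode='nearest')

wave = np.arange(3500.0, 9000.0, 0.5)
flux = np.zeros_like(wave)
flux[5000] = 1.0                               # a narrow emission line
broadened = vdisp_blur(wave, flux, vdisp=150.0)
```

Because the kernel is deterministic (no spline-fitting state), this approach is trivially bitwise reproducible, which was the motivation for the switch.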
I'll document some additional fixes that are coming along in this PR next.
@sbailey can I bring the trapezoidal rebinning script in redrock in as a dependency of desisim (faster), or should I stick with `desispec.interpolation.resample_flux` (slower)?
I'd prefer not to bring in a new desisim dependency on redrock, but if it really makes a substantial difference in practical runtime, OK. If that dependency becomes problematic we can copy the code into desisim (or otherwise augment `desispec.interpolation` with the faster, simpler version from redrock; `desispec.interpolation.resample_flux` is slower because it does more, with ivar weighting, masking, etc.).
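For reference, the core of a flux-conserving trapezoidal rebinner is small. This numpy sketch is not redrock's actual, speed-optimized implementation; it just shows the idea: integrate the piecewise-linear input spectrum over each output bin and divide by the bin width.

```python
import numpy as np

def trapz_rebin(x, y, xnew):
    """Flux-conserving rebinning of spectrum (x, y) onto bin centers xnew.
    The output bin edges (and hence xnew) must lie within the input grid."""
    # Bin edges: midpoints between output centers, extended at both ends.
    edges = np.concatenate([[xnew[0] - 0.5 * (xnew[1] - xnew[0])],
                            0.5 * (xnew[:-1] + xnew[1:]),
                            [xnew[-1] + 0.5 * (xnew[-1] - xnew[-2])]])
    # Cumulative integral of the input spectrum (trapezoid rule).
    cum = np.concatenate([[0.0],
                          np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))])
    # Integral over each output bin = difference of the cumulative integral
    # evaluated (by linear interpolation) at the bin edges.
    binint = np.diff(np.interp(edges, x, cum))
    return binint / np.diff(edges)

x = np.linspace(0.0, 10.0, 101)
y = np.full(101, 3.0)
xnew = np.linspace(1.0, 9.0, 17)
rebinned = trapz_rebin(x, y, xnew)   # a constant spectrum rebins to itself
```

Unlike `resample_flux`, this does no ivar weighting or masking, which is exactly why a dedicated trapezoidal rebinner can be faster.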
@weaverba137 would you be able to help me debug these failing tests? Everything passes on my laptop, but I'm worried that I've messed up the test configuration somehow; e.g., `import desimodel.fastfiberacceptance` doesn't even work.
Which test failures would that be? The latest push has everything passing. However, I do not see how you're testing against astropy 5.0.
That's really odd but I'm not going to complain.
Where do I specify (other than the `requirements.txt` file) that I want to test against astropy 5.0 (which @sbailey recommended)?
https://github.com/desihub/desisim/pull/559/files#diff-4d7c51b1efe9043e44439a949dfd92e5827321b34082903477fd04876edb7552R4
I'm even more confused now. I thought the whole point of updating desisim was to get it ready for astropy 5.0. If you're not testing against it, then you're not getting ready for it. Your last commit says "test against astropy 5.0 and desimodel 0.17.0", but again, you are not in fact doing that. Why did you use those particular words?
To be specific, astropy 5.0 is currently disabled in the GitHub Actions tests. At some level it doesn't matter what values you put into `requirements.txt`, because those are going to be overridden in the GitHub Actions tests anyway.
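For context, the desihub repositories typically pin these versions in the GitHub Actions workflow matrix rather than in `requirements.txt`. The excerpt below is illustrative only (the file path, matrix keys, and versions are assumptions, not copied from this repository), but it shows the mechanism by which the workflow overrides `requirements.txt`:

```yaml
# .github/workflows/python-package.yml (illustrative excerpt)
strategy:
  matrix:
    python-version: ['3.8']
    astropy-version: ['==5.0']   # pins set here override requirements.txt
steps:
  - name: Install astropy
    run: pip install 'astropy${{ matrix.astropy-version }}'
```

So to actually test against astropy 5.0, the version must appear in this matrix, not just in `requirements.txt`.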
Also, until moments ago, you may well have been testing against desimodel/0.17.0 code, but the corresponding test branch for the data did not exist. That is fixed now.
> I'm even more confused now. I thought the whole point of updating desisim was to get it ready for astropy 5.0. If you're not testing against it, then you're not getting ready for it.
I've been updating desisim because it has been broken for the better part of a year. By no means were my (major) updates related to astropy 5.0 other than serendipitously (I thought that was clear from all my documentation in this PR).
> Your last commit says "test against astropy 5.0 and desimodel 0.17.0", but again, you are not in fact doing that. Why did you use those particular words?
I used those words because I had updated the `requirements.txt` file and thought that was sufficient (I'm new to GitHub Actions). I now see the file where the version is specified, so I'll update it there.
Also, this was clearly an issue I was not aware of, and it was outside the purview of this PR (but thanks for fixing it): https://github.com/desihub/desimodel/issues/155
OK, let's see how that goes.
By the way, you dropped the 4.0.1post1 test, which you should really still be doing, at least until we're well past the everest-era code bases.
Commentary: desisim doesn't work with our current everest-era environment for reasons unrelated to astropy 4.x. If the most straightforward path to fixing it involves supporting only astropy 5.0, that is fine, since we are about to switch to it imminently and doing so would not break anything that currently works anyway.
In contrast, desispec et al. currently work with both astropy 4.0.x and 5.0, and we'd like to maintain that until we are solidly transitioned into the Fuji era with astropy 5.0. desisim is a special case because it has been broken for so long that getting it working under any environment is progress.
Any thoughts on why the coverage tests fail and how I can fix this? Otherwise, this PR is ready for review (although I may have some additional small updates / bug fixes, as I'm looking at the mock simulations in desitarget with this branch now, too).
The test coverage has decreased dramatically! It's rare to see a PR that changes the coverage by that much. Did you add a lot of code that is not tested or did you disable tests of existing code?
Another discrepancy: You're using desimodel/0.17.0 code but desimodel/0.12.0 data. I've gone ahead and updated the CI test so that desimodel data is used consistently.
Thanks. I added many more unit tests and not a lot of new code, so I would have expected the coverage to increase a lot, not the other way around.
OK, so we'll have to compare this branch to master.
If you look at https://coveralls.io/builds/45574713, coverage of desisim/templates.py has decreased by ~ 11%. That's a lot, and it accounts for the bulk of the overall decrease.
And `test_templates.py` is mostly commented out. So there isn't much of a mystery here. You must have thought you uncommented the tests, but didn't actually check in the change.
Coverage is back to pre-PR levels (even with a slight increase) and all tests are passing, so merge when ready.
[WIP]
In this mishmash PR I'm planning to address #507, #537, #549, and #558. I'll provide details of the changes when I'm done.