Closed — keflavich closed this issue 3 years ago
Hi @keflavich - in fact poppy is units-friendly, and makes extensive use of `astropy.units` throughout the code, though some of the docs are out of date and don't reflect that clearly. (Some of the docs were written well before astropy itself even existed.)
Try the same function call with wavelength equal to `2e-6`, `2*u.micron`, or `2e-6*u.meter`, and you will find the same results in all cases. Numbers without units are interpreted as meters, but it's totally fine and supported to supply inputs as quantities with units too.
This applies to many/most of the optical element classes as well: `CircularAperture(radius=300*u.centimeter)` is equivalent to `CircularAperture(radius=3*u.meter)` is equivalent to `CircularAperture(radius=3)`. Image plane elements can be specified in `u.arcsec` or other angular units, detector pixel scales can be specified as `0.032*u.arcsec/u.pixel`, and so on.
What particular functionality are you looking for with units and with modeling? FWIW, we have already worked to make sure that output PSFs can be generated consistent with the photutils ePSF modeling framework; see the gridded PSF library code in WebbPSF.
That said, the output PSF data arrays do not have units associated with them (i.e. the `.data` attributes of the PSF HDUList are just plain ndarrays, not Quantities). At one point I benchmarked doing the optical calculations directly on Quantity arrays, but found a significant slowdown compared to calculations on pure ndarrays. So, for performance reasons, internally in the propagation calculation code poppy casts quantities to standard units (meters, arcsec) and pulls out the bare values for use in the FFT/MFT calculations.
Huh, thanks. I think I had put in a wrong unit, then, and misinterpreted the error message as 'units aren't supported': I passed in `u.arcsec` for the pixel scale instead of `u.arcsec/u.pixel`.
Thanks for pointing me in the right direction. I'll check out the WebbPSF code. That sounds like what I need, but I'm now unsure about performance: I'm modeling large numbers of fields, each containing ~10^5 stars. I gather the ePSF framework builds an oversampled, empirical PSF and then samples from that onto your grid? That might be fast enough.
The lack of units on outputs makes sense for the performance reasons you mentioned.
Thanks! I may come back here with further questions, but you might consider this issue instead as a request that the documentation be refactored to use astropy units where possible.
> That sounds like what I need, but I'm unsure now about performance - I'm modeling large numbers of fields each containing ~10^5 stars. I gather the ePSF framework builds an oversampled, empirical PSF and then samples from that onto your grid? That might be fast enough.
Yes, exactly. I'll direct you to my colleagues @shanosborne and @larrybradley who can help with that. And see the photutils docs of course. Here's a worked example of doing this for a simulated star field for JWST NIRCam images: https://spacetelescope.github.io/notebooks/notebooks/JWST/NIRCam/PSF_phot_01/GriddedPSFModel_simulation_and_photometry.html
The performance with ePSFs and photutils modeling for photometry or astrometry should be much better than re-running a Fourier optics PSF calculation for each point, that's for sure.
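As a toy illustration of the oversample-then-rebin idea (pure NumPy, not the actual photutils `GriddedPSFModel` API): an oversampled PSF can be binned back down to detector resolution cheaply, with flux conserved:

```python
import numpy as np

oversample = 4
x = np.arange(-40, 40) / oversample          # fine grid, 4x oversampled
fine_psf = np.exp(-x**2 / 2)                 # toy 1-D "PSF" on the fine grid
fine_psf /= fine_psf.sum()                   # normalize to unit total flux
# Rebin by summing each group of `oversample` fine samples into one detector pixel:
detector_psf = fine_psf.reshape(-1, oversample).sum(axis=1)
assert np.isclose(detector_psf.sum(), 1.0)   # flux is conserved by the rebinning
```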
Happy to help more. And please feel free to keep providing more feedback and suggestions. I'll take a look and see if I can make that error message clearer about pixel scale units for detectors, for instance.
The error there is probably clear and I was just being hasty and making bad assumptions.
You use units, and in most places things make sense. But when I add a detector (`osys.add_detector`), I get different answers depending on the units used.
So, for example, if I follow "Using ZernikeWFE to introduce astigmatism in a PSF" from the documentation and pass in `0.01*u.arcsec/u.pix`, I get a plot that looks very similar to the documentation. If I pass in `48.5*u.nrad/u.pix`, I get a very different picture, although it should be equivalent. If I call with `(48.5*u.nrad/u.pix).to(u.arcsec/u.pix)`, I'm back to the original look.
For `calc_psf` (same example): if I `print(psf[0].header)`, it reports the wavelength as 460 meters, because I passed it in as `460*u.nm`. If I call it with `(460*u.nm).to(u.m)`, then I see the expected value in the header.
I suspect that in both of these cases the code checks whether the value is convertible and then just strips the units, rather than converting first and then stripping.
Thanks much @mcbeth, those do sound like bugs. @shanosborne or I will take a look, but likely not until next week or so.
@mcbeth - Happy new year. It's been a while, but I finally got around to looking further into the issue you reported with detectors when using pixel scales in other units such as `u.nrad/u.pixel`.
It turns out that the PSF computation itself was handling the units correctly; the display functions, on the other hand, were not appropriately handling the different pixel scale units. I've just implemented a fix for that. I also made some other related improvements to the display functions, to allow specifying PSF or wavefront displays using any desired angular unit for the X and Y axis scales, not just arcseconds. That's in the new version of PR #378 that I just pushed. Cheers.
Addressed through #374 and #378
I'd like to use `poppy` to make synthetic PSFs to use in modeling with `astropy.models`, so it would be nice if they played well together. Right now, AFAICT, poppy is not `unit`-friendly. Is refactoring to include units in the works? Notably, some of the examples specify units ambiguously (e.g., https://github.com/spacetelescope/poppy/blame/develop/docs/examples.rst#L32 says "microns" but the value is in meters). Are there any plans to incorporate units and/or modeling into poppy?