Hi all, I wanted to give a brief report of some of the things I've been investigating for this issue and some of the surprises/problems that have emerged.
The second kick profile derives from a truncated von Karman phase fluctuation power spectrum, which is related to a Kolmogorov phase power spectrum but generalized by 1) the presence of a finite outer scale (the von Karman bit), and 2) a low-k cutoff (the truncation bit). In the quick-and-dirty version of the SecondKick code I had previously been working on, I also included a hard-coded convolution by an obscured Airy profile (following the example of the source I was looking at).
When trying to clean up the existing code, I thought I'd try implementing the VonKarman profile by itself first, since it could be a useful GSObject on its own as a more realistic atmospheric profile than our existing Kolmogorov.
What I've discovered is that the Fourier-space profile of the von Karman object does not approach 0 as k -> infinity; it instead approaches a positive nonzero constant. For some relatively innocent-looking combinations of (wavelength, r0, L0), in fact, this constant is greater than the default gsparams->maxk_threshold = 1e-3, which means I can't assign a finite maxk. It also makes the surface brightness at the origin infinite (because the Hankel transform of a constant is a delta function), which complicates constructing a OneDimensionalDeviate for photon shooting (and generally makes evaluating xValue difficult). The problems get worse for the second-kick-without-Airy profile, as the asymptotic constant becomes even larger when you add the low-k cutoff.
I suppose assigning maxk = infinity may be okay, as whatever you end up convolving by will assert its own maxk when drawing in FFT mode. I'm a bit more concerned about the radial profile for photon shooting, though. My thought there is to decompose the Fourier profile into a constant piece plus a piece that tends to zero. Shooting the appropriate fraction of delta-function photons from the constant piece should be straightforward. Shooting the remaining photons seems like it should mirror what's done in other SBProfiles, with the one additional wrinkle that the Hankel profile of the piece-that-tends-to-zero can dip negative before approaching zero. (This feels a little weird to me, but may be okay? Do any of our other radial profiles ever have negative values in their Hankel transforms?)
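As a toy sketch of that decomposition (everything here is hypothetical; radial_sampler stands in for whatever OneDimensionalDeviate-like machinery would sample the residual piece):

```python
import numpy as np

def shoot_vonkarman(n_photons, delta_amplitude, radial_sampler, rng):
    # delta_amplitude: asymptotic value of the Fourier profile as k -> inf,
    #                  i.e. the flux fraction carried by the delta function.
    # radial_sampler:  draws radii from the piece-that-tends-to-zero.
    n_delta = rng.binomial(n_photons, delta_amplitude)
    x = np.zeros(n_photons)  # delta-function photons stay at the origin
    y = np.zeros(n_photons)
    n_rest = n_photons - n_delta
    r = radial_sampler(n_rest)
    theta = rng.uniform(0., 2. * np.pi, n_rest)
    x[n_delta:] = r * np.cos(theta)
    y[n_delta:] = r * np.sin(theta)
    # (Glosses over the wrinkle that the residual piece's radial profile
    # can dip negative, which would require negative-flux photons.)
    return x, y
```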
The other option is to just give up on an independent vonKarman and SecondKick altogether. For the SecondKick in particular, the current working implementation that includes a hard-coded convolution by an Airy profile doesn't have the above problems with maxk and photon-shooting. It also can't take advantage of some of the more advanced things in OpticalPSF though, like struts, since the Airy is permanently built in.
Thoughts?
I don't understand the maxk thing.
I thought a von Karman profile was defined as F(k) = A (k^2 + 1/L0^2)^(-11/6). As k -> inf, F -> 0. Why are you finding that it tends to a constant?
That's the power spectrum of the phase fluctuations.
The Fourier transform of the surface brightness profile is exp(-0.5 D(lam k/(2 pi))), where D(r) is the phase structure function

    D(r) = (L0/r0)^(5/3) * (2^(1/6) Gamma(11/6) / pi^(8/3)) * ((24/5) Gamma(6/5))^(5/6)
           * [ Gamma(5/6)/2^(1/6) - (2 pi r f0)^(5/6) K_(5/6)(2 pi r f0) ]

(and f0 is 1/L0). (See this paywalled link.) The K_nu Bessel function in the second bracketed term goes like exp(-r) at large r, so both D and the Fourier profile tend to constants.
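To make the saturation concrete, here's a quick numerical check of D(r) as transcribed above (kv is scipy's modified Bessel function of the second kind; the 0.1725 saturation coefficient follows from evaluating the constant term):

```python
import numpy as np
from scipy.special import gamma, kv

def structure_function(r, r0, L0):
    # Von Karman phase structure function D(r), as transcribed above.
    f0 = 1.0 / L0
    coef = ((L0 / r0)**(5./3.)
            * 2**(1./6.) * gamma(11./6.) / np.pi**(8./3.)
            * (24./5. * gamma(6./5.))**(5./6.))
    x = 2 * np.pi * r * f0
    return coef * (gamma(5./6.) / 2**(1./6.) - x**(5./6.) * kv(5./6., x))

r0, L0 = 0.15, 25.0  # meters
for r in [0.1, 1.0, 10.0, 100.0]:
    print(r, structure_function(r, r0, L0))
# D(r) saturates at ~0.1725 * (L0/r0)^(5/3) rather than growing without
# bound, so exp(-0.5 D) tends to the nonzero constant exp(-0.5 D_inf).
```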
Well, that constant term doesn't make a whole lot of sense. Maybe the math works out that way, but that means there is a delta function sitting at the center of the PSF. We could just define a modified von Karman that is the real von Karman minus the delta function. So D(r) would just have the second term in the brackets, without the Gamma(5/6)/2^(1/6) term.
Is there any reason we expect the real atmosphere to actually produce a nice Kolmogorov-y thing plus a faint delta function at the center?
Actually, that still wouldn't work. Then D(r) would go to zero, but SB(k) would still go to a constant. I guess you need to subtract off the exp(-0.5 D_limit) from SB(k) rather than subtracting D_limit from D(r).
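To spell out that last step: with SB(k) = exp(-0.5 D(lam k/(2 pi))), the large-k limit is C = exp(-0.5 D_inf). Subtracting D_inf inside the exponent gives exp(-0.5 (D - D_inf)) = SB(k)/C, which still tends to 1 rather than 0. But SB(k) - C does tend to 0, and since a constant in k-space is the transform of a delta function, subtracting C removes exactly the delta-function component (of amplitude C) from the real-space profile.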
> Is there any reason we expect the real atmosphere to actually produce a nice Kolmogorov-y thing plus a faint delta function at the center?
Seems unlikely, but I guess probably nobody could tell in reality. For one, this is in the infinite-exposure limit. For another, real telescopes always cut off power at maxk = 2 pi D / lambda, so they aren't capable of producing delta functions.
That said, I think if we were to make a VonKarman profile, given how pervasive the vK power spectrum is, I'd be inclined to just follow the math, even when it's wonky. It'll get sorted out in the next convolution. There's always an Airy function or something like it waiting to cut the power.
Here's a plot of the delta-function amplitude vs. r0/L0. The horizontal line is at 1e-3, i.e., the default maxk_threshold. A typical value for r0/L0 might be something like 0.15/25 = 0.006, in which case the delta amplitude is off the chart at ~10^(-190). The typical range quoted for L0 is 10m to 100m, but I was hoping to get it to work down at 5m. If the seeing is also phenomenal, say 0.3 arcsec at 500nm, and we're observing in y-band, then I get r0/L0 = 0.15 and the delta amplitude is 0.14. If the seeing is phenomenal and we're in y-band but I leave L0 = 25m, then r0/L0 = 0.03 and the delta amplitude is 5e-13. So I guess we're almost always way to the left on this plot, and it's just the one corner case that has me bothered...
This is only for VonKarman, though. SecondKick w/o Airy I think may still be problematic for reasonable lam, r0, L0, kcrit...
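For concreteness, the numbers above follow directly from the saturation value of the structure function, i.e. just exp(-0.5 D_inf) with the ~0.1725 coefficient from the formula earlier (small differences from the quoted values are rounding):

```python
import numpy as np

def delta_amplitude(r0_over_L0):
    # Amplitude of the delta-function component: exp(-0.5 * D_inf), with
    # D_inf ~= 0.1725 * (L0/r0)^(5/3).
    return np.exp(-0.5 * 0.1725 * r0_over_L0**(-5./3.))

for x in [0.006, 0.03, 0.15]:
    print(x, delta_amplitude(x))
# 0.006 -> ~1e-189  (typical seeing, L0 = 25m)
# 0.03  -> ~1e-13   (phenomenal seeing in y-band, L0 = 25m)
# 0.15  -> ~0.13    (phenomenal seeing in y-band, L0 = 5m)
```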
> There's always an Airy function or something like it waiting to cut the power.
I think you're being too optimistic on the part of the user. I certainly have used Kolmogorov PSFs without bothering with the Airy part. I'm sure some users would try to use a VonKarman PSF all by itself, and they could easily choose values for things that get into the problematic corner case.
So I guess I'd still rather either subtract it off or maybe just artificially truncate it where the non-delta function part drops below maxk_threshold. That way we can be sure the code is stable for photon shooting and such.
> I think you're being too optimistic on the part of the user. I certainly have used Kolmogorov PSFs without bothering with the Airy part.
Hmm... Maybe. Even in that case you'd be saved by the pixel convolution.
I don't think silent truncation is a good idea though. If the user wanders into the problematic region where the delta is 14% of the total flux, then ignoring it entirely leads to a very different-looking encircled energy function. We could potentially raise an exception, or at least a warning, in this case I suppose.
> I don't think silent truncation is a good idea though.
+1
I've been thinking this over... when you posted your first message I thought, OK, just convolve with an Airy and that kills the large-k modes, and the composite object has a totally sensible maxk. Even just a pixel response will help, though not as much as an Airy. So I feel like you're fine with warning if the delta would be > some threshold value in the encircled flux. If you want to be adventurous you could also have a kwarg specifying how strict to be, i.e., if strict=True then getting into that regime raises an exception, while if strict=False it's just a warning, or something like that.
OK. I still think the delta function is a spurious, unphysical implication of the math, but I'm fine with this plan.
> but I'm fine with this plan.
I didn't get what the plan actually was (several options above).
- Josh will implement a VonKarman profile that can be used as a component of the SecondKick.
- If the user gets into a case where this would result in a significant delta-function component (i.e., where the asymptotic F(k) as k -> inf exceeds maxk_threshold), it will raise a warning that the profile will probably not be usable by itself or be photon-shootable, but should be fine if convolved with an Airy or another component that truncates in k-space. (A sketch of this check is below.)
- Perhaps allow the user to suppress this warning with a warn=False kwarg or something like that. (I think we have something like this elsewhere in GalSim, so look around and try to match the API.)
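Something along these lines, perhaps (purely illustrative; the 0.1725 coefficient and names come from the discussion above, not a final API):

```python
import warnings
import numpy as np

def _check_delta_amplitude(r0, L0, maxk_threshold=1.e-3, warn=True):
    # Asymptotic value of F(k) as k -> inf, i.e. the delta-function amplitude.
    amp = np.exp(-0.5 * 0.1725 * (L0 / r0)**(5./3.))
    if warn and amp > maxk_threshold:
        warnings.warn(
            "VonKarman delta-function component has amplitude %g.  This "
            "profile will probably not be drawable or photon-shootable by "
            "itself, but should be fine when convolved with an Airy or "
            "other profile that truncates in k-space." % amp)
    return amp
```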
I think I've got a reasonably well-tested implementation of the VonKarman profile now. @rmjarvis, @rmandelb, could you take a quick look at this part (on the secondkick branch) before I head back in to get the SecondKick bit working? (I.e., ignore the second-kick bits; just the VonKarman bits are worth looking at right now.) Feel free to tell me to make the vK-without-2k code a proper PR if you think that would be more efficient.
I've been thinking today a bit about the API for using the second kick. It would be nice if this were completely transparent, i.e., the second kick automatically gets applied if the user draws through a phase screen using photon shooting, and automatically doesn't get applied if the user draws using DFT. The problem I'm facing is that these two situations don't just require adding or omitting the second kick; they also require generating the phase screens with or without the high-k turbulence included. I could delay constructing the screens, I suppose, but even then one couldn't easily switch back and forth between DFT and photon shooting using the same atmosphere object.

So I think the solution may be to require the user to explicitly create a screen with or without high-k turbulence, attempt to infer from the screen whether or not a second kick should be added to the PSF (while allowing the user to override this inference), and then issue a (suppressible) warning if the user tries to draw using the inappropriate method. I.e., warn if the user tries to draw a second-kick-included PSF using DFT, or a second-kick-excluded PSF using photon shooting. Any thoughts?
Certainly most use cases would only do one or the other, so I don't think we need to make it super easy to do both with the same Atmosphere object.
Maybe delay making the screens until you know whether or not to include the high-k part, and then issue a warning if they then go on to also do the other method.
I think that may work. We would also have to decide what to do if the user just asked for the wavefront or wavefront_gradient before the high-k-or-not decision was made. I think in that case instantiating the screen including the high-k turbulence is probably the right thing to do.
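I.e., roughly this shape (a hypothetical sketch, not GalSim's actual API):

```python
import warnings

class AtmosphericScreen(object):
    def __init__(self):
        self._includes_high_k = None  # undefined until first use

    def _instantiate(self, include_high_k):
        if self._includes_high_k is None:
            self._includes_high_k = include_high_k
            # ... actually generate the phase screen array here ...
        elif self._includes_high_k != include_high_k:
            warnings.warn("Screen was instantiated for the other draw method.")

    def wavefront(self, u, v):
        # Asking for the wavefront before any draw defaults to including
        # the high-k turbulence.
        self._instantiate(include_high_k=True)
        # ... evaluate and return the wavefront at (u, v) ...
```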
Also, I think this might break, or at least complicate, the immutability of AtmosphericScreen. I can't think of any reason why that would be a problem though (and PhaseScreenList is already mutable, so I don't think the immutability propagates to anything else...).
Right. I guess there should be an internal state _includes_high_k, which could be None if still undefined, and then True or False once it's determined. So this would be a mutation that happens the first time you draw it.
I guess it makes the most sense to have __eq__ include testing this value. Two objects that are equal should behave equivalently, and the three possible internal states lead to distinct behavior. I'm ok with this mutability.
I was wrong. Making AtmosphericScreen unhashable means that PhaseScreenPSF also has to be unhashable. I got around the unhashability of PhaseScreenList before by using only the contents of the PhaseScreenList, and not the PSL itself, when creating the hash for PhaseScreenPSF. But if the subitems are now also unhashable, then that won't work. Grrr... Any thoughts on whether PSFs should always be hashable/immutable?
Okay, I thought about this some more while getting lunch. The mutability problem only exists before we choose whether or not to include the high-k turbulence. After that decision, everything should be immutable/hashable. So maybe the answer is to write a hash function that raises an exception if that decision hasn't been made yet, but otherwise uses the value of the decision as part of its hash value. This feels a little devious, but less so than anything else I've thought of so far.
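I.e., something like this (illustrative; the actual set of hashed parameters is elided):

```python
def __hash__(self):
    # Refuse to hash until the high-k decision has been made; afterwards,
    # fold the decision into the hash along with the other screen parameters.
    if self._includes_high_k is None:
        raise TypeError("AtmosphericScreen is unhashable until it has been "
                        "instantiated with or without high-k turbulence.")
    return hash(("galsim.AtmosphericScreen", self._includes_high_k))
```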
I guess normally the main use of a hash is to let the object be a dictionary key: names[psf] = 'some name for this object' or something. (Seems a little backwards of good design in this case, but there's probably some real use case where this would be useful.)
So we could either do as you suggest and consider an Atmosphere unhashable if it's still in the undefined state. Or we could just write the hash in the straightforward way and point out to the user that if they care about the hash for such purposes, they need to either do all hashing operations prior to drawing or all after drawing, since the drawing step will change the hash.
I was going to suggest that the hash just be the same for all three cases (i.e. don't include the internal state in the hash), but I don't think that actually solves anything in the case of it being used as a dict key, since it has to go back to an equality test anyway, and I do think we want these to test as not equal.
I went ahead and implemented the undefined-state version: 9cb2c9d. It required a bit of hacking to do_pickle, since Hashable thinks AtmosphericScreen is always hashable, instead of just sometimes. (I think it might be possible to fix that with some metaclass & __subclasshook__ hacking, but I didn't want to go down that rabbit hole for now.)
I read up on hashing in Python. This article was particularly interesting. Anyway, I think it would actually work fairly well to have the hash not use the internal state, so it stays the same even through the PSF being drawn, even though before and after don't compare as equal.
In particular, if the user uses the psf as a key into a dictionary, then draws it (so the internal state changes), then accesses the dictionary with the same object that they used as the key, it would still work (since both the dict's copy and the user's copy are actually the same object, so they still compare as equal).
The thing that wouldn't work is to use the psf as a dict key, draw with it, and then make a brand new psf object with all the same parameters and access the dict with that, since then, even though the hashes are equal, the objects don't compare as equal.
I think this is ok. The object that is the key in the dict is no longer equal to the thing the user is using to access it, so I think it's ok to give a KeyError there. Plus, this must be an extremely unlikely thing for the user to want to do, so I'm sure it will never turn up. The above use case is already rare enough, although I guess plausible, so I think that's the more important case to enable.
Here's the transcript of a Jupyter session I was playing around with to understand this (expected outputs, including the two KeyErrors, are noted in the comments):
```python
class PSF(object):
    def __init__(self, value):
        self.value = value
        self._high_k = None   # None = undefined, False = shoot, True = fft

    def drawShoot(self):
        assert self._high_k != True
        self._high_k = False

    def drawFFT(self):
        assert self._high_k != False
        self._high_k = True

    def __hash__(self):
        # Deliberately does not include the mutable internal state.
        return hash(('PSF', self.value))

    def __eq__(self, other):
        return (isinstance(other, PSF) and self.value == other.value
                and self._high_k == other._high_k)

psf1 = PSF(1)
psf2 = PSF(1)
print('==', psf1 == psf2)                    # True
print('hash ==', hash(psf1) == hash(psf2))   # True

psf1.drawShoot()
print('==', psf1 == psf2)                    # False
print('hash ==', hash(psf1) == hash(psf2))   # True

d = dict()
d[psf1] = 'psf1'              # Put in a key with internal state set for shoot.
print('d[psf1] = ', d[psf1])  # psf1
print('d[psf2] = ', d[psf2])  # KeyError: psf2 is still undefined, so this doesn't work.

d[psf2] = 'psf2'              # Put in a key with internal state undefined.
print('d[psf1] = ', d[psf1])  # psf1: this still finds the shoot psf.
print('d[psf2] = ', d[psf2])  # psf2: this finds the undefined psf.

psf2.drawFFT()                # Change psf2 from undefined to fft.
print('d[psf1] = ', d[psf1])  # psf1: this still works.
print('d[psf2] = ', d[psf2])  # psf2: this still works.

psf3 = PSF(1)
print('d[psf3] = ', d[psf3])  # KeyError: no longer any undefined psf in the dict.

psf3.drawShoot()
print('d[psf3] = ', d[psf3])  # psf1: this now finds psf1.
```
Interesting, thanks. I am rather fond of deliberately ignoring advice in the Python docs. I'll give this a try.
This is a follow-up to issue #824, where we realized that the second kick seems to be important for getting good agreement between the geometric-optics approximation and the full Fourier treatment.