Hi,
Thanks for the bug report. Do you mean v020-dev, or master?
A reproducible example from v020-dev:
import numpy as np
from prysm import coordinates, polynomials, geometry, util
x, y = coordinates.make_xy_grid(1000, diameter=2)
r, t = coordinates.cart_to_polar(x, y)
mask = geometry.circle(1, r)
z4 = polynomials.zernike_nm(2, 0, r, t, norm=False)
z11 = polynomials.zernike_nm(4, 0, r, t, norm=False)
z4[~mask]=np.nan
z11[~mask]=np.nan
1/util.rms(z4)
>> 1.7321632382089933
np.sqrt(3) # n+1
>> 1.7320508075688772
polynomials.zernike_norm(2,0)
>> 1.7320508075688772
1/util.rms(z11)
>> 2.2363566714874206
np.sqrt(5) # n+1
>> 2.23606797749979
polynomials.zernike_norm(4,0)
>> 2.23606797749979
There seems to be an abundance of agreement on v020-dev.
I was working with master. Digging a little deeper, I confirmed that the (n, m) indexing being used is correct, which means the normalizations being used are also correct.
Sorry about that!
Great, no problem. FWIW, as you can see, very little of the v0.19 API's calling signatures will survive the 19->20 transition.
After a little digging, I discovered two things.
First, FringeZernike calls appear to affect each other, so I had to run this code multiple times with only one call active at a time.
Second, there does appear to be a normalization issue between the PV and RMS conventions, but only for the spherical terms; it doesn't show up for 16-term Zernike fits, but it does for 36-term fits. In the 36-term fit, the Z9 coefficient came out as 0.010251 wv un-normalized and 0.011884 wv normalized. Theory says the normalized value should be 0.00458 wv.
So, it looks like there's something peculiar going on.
from prysm import Interferogram, FringeZernike, sample_files, zernikefit
p = sample_files('dat') # sample Zygo .dat file, will be downloaded on demand and saved locally
i = Interferogram.from_zygo_dat(p)
i.change_z_unit('waves')
i.crop().mask('circle', 40).crop() # first crop crops bounding box to non-nan data. Then mask is applied
i.remove_piston_tiptilt()
# coefficients_rms_16 = zernikefit(i.phase, terms=16, norm=True, map_='Fringe')
# fz_rms_16 = FringeZernike(coefficients_rms_16, dia=i.diameter, z_unit=i.z_unit, norm=True)
# print(fz_rms_16.coefs[9])
# del fz_rms_16
# coefficients_pv_16 = zernikefit(i.phase.copy(), terms=16, norm=False, map_='Fringe')
# fz_pv_16 = FringeZernike(coefficients_pv_16, dia=i.diameter, z_unit=i.z_unit, norm=False)
# print(fz_pv_16.coefs[9])
# del fz_pv_16
coefficients_rms_36 = zernikefit(i.phase.copy(), terms=36, norm=True, map_='Fringe')
fz_rms_36 = FringeZernike(coefficients_rms_36, dia=i.diameter, z_unit=i.z_unit, norm=True)
print(fz_rms_36.coefs[9])
del fz_rms_36
# coefficients_pv_36 = zernikefit(i.phase, terms=36, norm=False, map_='Fringe')
# fz_pv_36 = FringeZernike(coefficients_pv_36, dia=i.diameter, z_unit=i.z_unit, norm=False)
# print(fz_pv_36.coefs[9])
# del fz_pv_36
When I run your script, I get the following values:
In [5]: print(fz_pv_16.coefs[9])
0.009133
...: print(fz_rms_36.coefs[9])
0.005905
In [9]: print(fz_pv_36.coefs[9])
0.005905
I see no interaction between RMS and PV normalization (no "memory" in the fit code of what it did before).
The 16-term fit and 36-term fit do not produce the same value, but that is pretty intelligible as non orthogonality of the Zernike polynomials over the true domain, which isn't a circle in the data sampled:
The dropout is near the "zone" where primary and tertiary spherical have their greatest slopes.
If you crop even smaller, there is a second effect: the Zernike polynomials are not (quite) orthogonal over discrete representations of circles, and with a small number of samples that "rounding error" will break the orthogonality and you can get a bit of crosstalk. prysm just does a least-squares fit, not Gram-Schmidt, so it is susceptible to that class of error.
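To make that concrete, here is a rough sketch of what I mean by a plain least-squares fit, reusing the v020-dev functions from the example above; the mode list and the deliberately non-circular mask are just illustrative choices, not prysm's fitting code:

# illustrative only: a plain least-squares Zernike fit over a masked grid,
# showing crosstalk between spherical terms when the domain is not a circle
import numpy as np
from prysm import coordinates, polynomials, geometry

x, y = coordinates.make_xy_grid(256, diameter=2)
r, t = coordinates.cart_to_polar(x, y)
# deliberately clip the circle so the fit domain is not the full disk
mask = np.logical_and(geometry.circle(1, r), np.abs(x) < 0.8)

nms = [(2, 0), (4, 0), (6, 0)]  # defocus, primary and secondary spherical
modes = [polynomials.zernike_nm(n, m, r, t) for n, m in nms]

# synthetic data containing only primary spherical
data = 0.01 * polynomials.zernike_nm(4, 0, r, t)

A = np.stack([mode[mask] for mode in modes], axis=1)
coefs, *_ = np.linalg.lstsq(A, data[mask], rcond=None)
print(coefs)  # over a full circle this is ~[0, 0.01, 0]; over the clipped
              # domain the defocus and secondary spherical terms leak in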
Right, the difference between the 16-term and 36-term fits is reasonable, given the limits of numerically fitting Zernikes. That's expected behavior.
The main issues I'm seeing are in the relationship between the fz_rms_36 and fz_pv_36 coefficients. In your run, they're identical when they shouldn't be. That's the first behavior I saw. When I commented out the fz_rms_36 call and ran the script, the fz_pv_36 value changed; that's the evidence of interaction I was talking about.
The second issue is that even when running the calls separately to get the non-interacting values, they don't differ by a factor of sqrt(5).
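For reference, this is the arithmetic behind the "theory says" value above, assuming the RMS-normalized coefficient is just the un-normalized one divided by the norm factor:

import numpy as np
a_pv = 0.010251        # un-normalized Z9 coefficient from my 36-term fit
norm_z9 = np.sqrt(5)   # RMS norm factor for primary spherical (n=4, m=0)
print(a_pv / norm_z9)  # ~0.00458 wv, the value I expect from the normalized fit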
Ah, I see - good catch. Without looking at the code, I can probably pinpoint the problem. Between something like v0.8 (it's been many years; I forget when I first did this) and v0.19, a cache was implemented that stores the evaluated Zernike polynomials. The cache is keyed by array size, which is not amenable to generalized fitting, so I added some spaghetti that can short-circuit it with different keys (I think the function is something like ZCacheMN._gridcache_bypass(), and there is another function to delete an entry; the cache is still used to accelerate the recurrence used to compute the Zernikes). There is probably a hardcoded normalization parameter somewhere. That has been a source of several bugs over time.
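To illustrate the failure mode I suspect (a made-up sketch, not prysm's actual cache code): if the cache key omits the norm flag, whichever call populates the cache first wins, which would produce exactly the PV/RMS interaction you saw:

import numpy as np

_mode_cache = {}

def evaluate_defocus(samples, norm):
    # stand-in for the real polynomial evaluation: defocus on a square grid
    x = np.linspace(-1, 1, samples)
    xx, yy = np.meshgrid(x, x)
    mode = 2 * (xx * xx + yy * yy) - 1
    return mode * np.sqrt(3) if norm else mode

def get_mode(samples, norm):
    key = samples                      # bug: 'norm' is not part of the key
    if key not in _mode_cache:
        _mode_cache[key] = evaluate_defocus(samples, norm)
    return _mode_cache[key]            # later callers get whatever was stored first

a = get_mode(128, norm=True)
b = get_mode(128, norm=False)          # silently returns the normalized mode
print(np.allclose(a, b))               # True -- the "interaction" between fits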
If you want to contribute a fix for a v0.19.2, I'll gladly accept a PR. I don't really have any first party support planned for v0.19; v0.20 has been in progress for months already and has a low degree of overlap in the source code (little will survive the transition untouched). v0.19.2 commits are something of a dead end / not mergeable into v020-dev.
The polynomial module in v020 is complete, and some decent portion of the interferogram module is working (but not all tests pass yet). If you have an immediate need you could try those. The 020 release notes in the docs cover all of the changes made thus far.
Sounds good, and yeah, my suspicion was that it was somewhere in the caching. It's an elegant solution, but that's the danger with elegant solutions. I went more brute-force on my end and created a saved pupil file that I rescale all of my OPD maps to, but I'm not writing distributable code, so it's less constrained.
I was using prysm to validate my own Zernike fitting, which is how I ran into the bugs while tracking down the discrepancies. I was alerting you to the issue in case it wasn't a known one, but if that code is going to be made obsolete anyway, I wouldn't say it's a high priority.
Nice work on prysm, it's a nifty package!
For what it's worth, there are no caches anymore in v020-dev. They are all deferred to the user level, with a few helper functions to make that easier. It's enabled a lot of the code to be removed (~30% fewer lines, with additional features like rich segmented modeling) without sacrificing speed. Actually, without all the churn in class member and dict access, the Zernike code is ~20% faster, overall, too. And I got the 2DQ polynomials working, and a unified interface for polynomials and fitting.
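As a sketch of what "deferred to the user level" can look like (using functools here rather than the actual helper functions): memoize the mode evaluation on a fixed grid, keyed by (n, m, norm), so repeated fits reuse the arrays:

from functools import lru_cache
from prysm import coordinates, polynomials

x, y = coordinates.make_xy_grid(512, diameter=2)
r, t = coordinates.cart_to_polar(x, y)

@lru_cache(maxsize=None)
def cached_mode(n, m, norm=True):
    # the grid is fixed by the closure, so (n, m, norm) is a complete key
    return polynomials.zernike_nm(n, m, r, t, norm=norm)

z4 = cached_mode(2, 0)  # computed once; later calls with the same key are free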
Hello,
I discovered an apparent issue in the Zernike RMS normalization for the m = 0 terms.
The source material (http://wp.optics.arizona.edu/jcwyant/wp-content/uploads/sites/13/2016/08/ZernikePolynomialsForTheWeb.pdf) has an RMS normalization table on page 28 showing the correct factors.
The normalization formula used in prysm.zernike.zernike_norm() is correct (sqrt(2*(n + 1)/(1 + kronecker(m, 0)))), but there's a subtle difference between Wyant's (n, m) indices and those of other systems: for the m = 0 (purely radial) terms, his radial indices n are half those of other systems. This leads to the normalization factors generated in the code being off for the spherical aberration terms. This can be checked by generating some m = 0 values and seeing that they disagree with Wyant's table.
See pg. 17 here: https://www.iap.uni-jena.de/iapmedia/de/Lecture/Imaging+and+aberration+theory1554069600/IAT+18_Imaging+and+Aberration+Theory+Lecture+12+Zernike+polynomials.pdf
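A quick numerical check of the mismatch: primary spherical has n = 4 in the usual indexing but appears under n = 2 in Wyant's, and feeding Wyant's n into the formula gives sqrt(3) instead of the expected sqrt(5):

import numpy as np

def zernike_rms_norm(n, m):
    # the formula quoted above: sqrt(2*(n + 1)/(1 + kronecker(m, 0)))
    return np.sqrt(2 * (n + 1) / (1 + (m == 0)))

print(zernike_rms_norm(4, 0))  # 2.236... = sqrt(5), correct for primary spherical
print(zernike_rms_norm(2, 0))  # 1.732... = sqrt(3), what Wyant's index produces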