Closed atsju closed 8 months ago
dernier code = most recent code? dernier = last?
Autocorrect from phone. Was supposed to be zernike.
I corrected it in the original comment. Thanks for watching out.
This is a very confusing topic. If I understand correctly, after the wavefront is calculated, DFTF calculates the Zernikes and then uses those values to remove tilt, coma, and defocus from the wavefront. The null is also subtracted out, which happens to be the expected Z8 term in the Zernike values.
So the null subtraction should be perfect regardless of whether the Zernike values are correct or not.
But it's really critical to pull out the defocus, tilt, and coma terms accurately!
DFTF calculates 49 Zernike terms. There are Zernike formulas/shapes/curves/polynomials 1 through 49, and there are Zernike "constants" that, when multiplied by the shapes and added up, fit very close to your actual wavefront (but since there are only 49 terms it is much smoother than the actual wavefront).
Constants should be labelled with a C as in C1-C49, and those are the numbers displayed on the left side. Very often we accidentally call them Z1-Z49, but Z8 is a waveform. A shape. A curve. C8 is a single number. When you multiply the two you get the best-fit curve for spherical aberration.
To calculate Zernike terms we fill 49 square matrices with the Zernike shapes (aka Zernike polynomials). Call that M. That's 49 million floating-point numbers for a 1000 × 1000 wavefront: 49 megafloats. Then we have the 49 constants in a comparatively tiny matrix. Call that matrix C. Then we have the actual wavefront (1 megafloat in this example). Call that W. M*C=W. DFTF calls the math library's solve() on this mess and "magic happens" to get the C values. If you only fill M with Z8 and a single cell for C8 it should work as well, but supposedly the solve() function works better if you put in all values 1 through 49. Note 49 is a perfect square. 1-16 is probably fine as well. I don't know.
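The M*C=W step described above can be sketched with a toy least-squares fit. This is not DFTF's actual code: it uses three radial terms instead of 49, made-up coefficients, and NumPy's lstsq standing in for the library solve() call.

```python
import numpy as np

# Columns of M are Zernike shapes sampled at each wavefront point;
# solve() finds the C that best satisfies M @ C = W (least squares).
rng = np.random.default_rng(0)
rho = np.sqrt(rng.uniform(0.0, 1.0, 5000))   # sample points over the unit disc

piston  = np.ones_like(rho)
defocus = 2*rho**2 - 1                       # circle defocus shape
sa      = 6*rho**4 - 6*rho**2 + 1            # circle Z8 spherical shape
M = np.column_stack([piston, defocus, sa])

true_C = np.array([0.10, -0.30, 0.25])       # made-up coefficients
W = M @ true_C + rng.normal(0.0, 1e-3, rho.size)  # "measured" wavefront + noise

C, *_ = np.linalg.lstsq(M, W, rcond=None)
print(C)   # recovers approximately [0.10, -0.30, 0.25]
```

With the basis evaluated at every valid wavefront pixel, the same solve recovers the C1-C49 constants the left panel displays.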
For annular wavefronts, you can get more accurate results if you kind of magnify the Zernike polynomials. As the center hole gets bigger and the ring around it gets thinner, you notice the polynomials all tend to look like a straight line with just a bit of curve to them. The solve() function does better with more pronounced curves. So there is a set of annular Zernike equations Z1-Z49 customized to "e", which is the relative size of the hole. Those should work better in the solve() function and get you much more accurate results. Not doing this can result in somewhat serious errors in the 3 Zernike curves that are subtracted: defocus, tilt, and coma (note there are 2 tilt and 2 coma curves). Not subtracting those out properly can add in error which might make you think you need to correct/polish out something that doesn't need to be (and shouldn't be) polished out.
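The "nearly straight lines" problem above can be seen numerically: on a thin ring the circle-Zernike columns become nearly parallel, so the design matrix handed to solve() is badly conditioned. A sketch (three radial terms only; the 90% obstruction is just an illustrative extreme):

```python
import numpy as np

def design(rho):
    """Circle-Zernike radial columns: piston, defocus, primary spherical."""
    return np.column_stack([np.ones_like(rho),
                            2*rho**2 - 1,
                            6*rho**4 - 6*rho**2 + 1])

full = np.linspace(0.0, 1.0, 2001)    # unobstructed radius samples
thin = np.linspace(0.9, 1.0, 2001)    # 90%-obstructed annulus

cond_full = np.linalg.cond(design(full))
cond_thin = np.linalg.cond(design(thin))
print(cond_full, cond_thin)           # the annulus is far worse conditioned
```

Annular Zernikes re-orthogonalize the basis over the ring, which restores good conditioning and stabilizes the fitted constants.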
This is my understanding. I could be totally wrong. If so I will update this comment.
Dale implemented annular Zernikes but I'm not sure how well it's tested. It's difficult because his C1-C49 values are rotated or flipped versus Michael Peck's values, maybe? And Michael Peck I think did things like coma and angle versus X and Y coma, maybe? Really not sure of the issue.
I suspect Dale just displays C1-C49. To integrate better in the gui you have to decide what to do about the null which is all in C8. The annular C8 null is different from the normal C8 null because the curve is very different. Do we display the null as an annular C8 null? Or keep it the way it is. The way it is now is fine in that we can subtract off the non-annular null value from the wavefront before displaying. Like we do now. But now the C8 value displayed can't just have the null subtracted. So the way we display C8 really would have to change. To make it less confusing we should probably change the formula for the null as it fits with the annular zernike's. Or do we need to? Should we subtract the null before we process the wavefront through solve() function? Does DFTF already do this or does it subtract the null after solve() is called? I don't know. I'm pretty sure we want to subtract off the null before calculating C1-C49.
The same issue with displaying C8 occurs with the "best fit conic constant" display. Do we want to show that as it pertains to annular zernikes? if we do some people who are used to DFTF with normal mirrors may be very confused.
Another issue - with normal mirrors you can multiply most of the displayed values C1-C49 by 2x to get the peak-to-valley contribution from that Zernike component, and the circularly symmetric ones (starting with C8 and continuing with secondary SA) by about 1.5x to get peak to valley. This works really nicely with the annular Zernikes but doesn't work with the current "normal" way of using DFTF when you have a hole in your mirror, because the "peak" or "valley" may be in the obscured central hole. This issue would be fixed by using annular Zernikes.
There are MANY other places we do stuff with Zernikes, such as: the Zernike smoothing feature, simulated wavefronts and igrams, adding/subtracting wavefronts (I think you can subtract with or without the null), filling in "regions", reports, pdf generation, and astig graphing. All of these will likely need code changes.
Ouch. Sorry George on first reading it looks like you are a bit confused. You have several things wrong there. I'll start at the bottom. You don't get PV by multiplying the Zernike values by 1.5. I don't know where you get that. Don't even know why that is important to the annular discussion.
The bottom line is that the current version of the annular code is incorrect. We need to use Mike's new C routines. The task will be to integrate them into DFTFringe like I did the previous routines. That may involve changing the data structures he used or adapting his data structures. I no longer remember exactly what I did.
Yes, the null may need to be computed differently when computing for an annular mirror. That will happen in the mirror config just like it does now. Then it will be subtracted or added the same way, in the same places, the other null is used. Mike's code will compute a better set of Zernike values that we will then use just like before.
Just to help you understand how the current code is broken: simulate a perfect mirror igram. Analyze it to make sure it is perfect. Then draw a central obstruction on the igram of at least 30%. Analyze that and see that the SA is no longer the same value. But it should be.
Don't get hung up on naming. Z8 is not by definition a waveform. Z8 in the software is a Zernike coefficient for the 8th term of the Zernike polynomial as numbered by the numbering method used.
A bit of history. There are several ways to calculate the Zernike value for each term at a point on the surface. Dave Rowe did so by writing an equation for each term. I copied that. You will find it in Zernikesprocess.cpp: double Zernike(int n, double X, double Y). You give it the coordinates of the point and the term number and it returns the value. There is also a polar version of it.
However, that becomes unmanageable if you want more Zernike terms. Mike Peck devised a different algorithm to compute Zernike values for any number of terms. It computes them once per point on the surface. That makes for big matrices, but we live with it. Memory is cheap these days.
So I use Mike's version to compute the annular ones. I maybe did the same for non-annular but I don't remember.
If you multiply the tilt Zernike term by 2x you get exactly peak to valley. Same with many of the terms. For the circularly symmetric terms like defocus, S.A. and 2nd-order S.A. you multiply by a number closer to 1.5 (for defocus it is exactly 1.5, for S.A. it is 1.4286 and the remaining are approximately 1.41). No one really cares about this. Almost no one.
This actually works with Mike's annular zernikes as well! Pretty cool.
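The 2x tilt factor and the ~1.5x factor for primary spherical can be checked numerically under the textbook unit-coefficient shapes. Note this is an assumption about normalization: DFTF's own coefficients carry an extra scale factor (the sqrt(5) mentioned later in the thread), so DFTF's exact per-term numbers can differ from this sketch.

```python
import numpy as np

# Sample along a diameter of the unit pupil; both shapes below hit
# their pupil-wide extremes on a diameter, so a 1-D slice suffices.
x = np.linspace(-1.0, 1.0, 200001)

tilt = x                         # unit-coefficient X tilt (rho*cos(theta))
sa   = 6*x**4 - 6*x**2 + 1       # unit-coefficient primary spherical (rho=|x|)

pv_tilt = tilt.max() - tilt.min()   # 2.0: coefficient-to-PV factor of 2
pv_sa   = sa.max() - sa.min()       # 1.5: factor of 1.5 for this convention
print(pv_tilt, pv_sa)
```

The spherical shape peaks at 1 (center and edge) and dips to -0.5 at rho^2 = 0.5, hence the 1.5 under this normalization.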
You make good points above. I will look to see when you do the null subtraction. To make absolutely as few changes as possible, the way you describe above is the easiest by far.
Yeah, I've seen the code to create the arrays of Zernike polynomials/curves. I've seen the polar method and the x,y method.
Mike's code will work for non annular as well as you just set e to zero. But I suppose it takes more computing time to calculate all the polynomial matrices. And the non-annular code already exists.
For more about PV for each zernike term see https://www.aos-llc.com/wp-content/uploads/AN016_WorkingWithZernikePolynomials.pdf
Most values are 2 or very close to 2 except for the smattering of ones that aren't.
Mike's code is a memory hog. It wasn't easy to implement until we got 64 bit. IIRC it can also be very slow to generate. Most of the time we don't need it since we need so few Zernike terms. However, the Zernike smoothing makes use of it.
Null subtraction is done in the same place where we remove tilt, coma, and defocus. Mostly in zernikeProcess.cpp null_unwrapped. Search for doNull.
You will notice that in that routine close to the top of it the null is modified if analyzing an annulus. If that is correct perhaps there are other places that need to do that.
Don't want to spam, just mention #30 here to link both issues
Thinking out loud...
I'm a bit unhappy that the null is going to get much smaller in value for annular mirrors. In my mind, mirrors get very difficult around when the null reaches -10 waves. I know someone did up to -15 waves but the point is I have a "feel" for how difficult it gets as the null gets larger. When we subtract the null we can do it using the new or old null. It shouldn't matter (I can check to see if Mike Peck agrees).
Using the old null calculation for annular igrams:
Pros:
Cons:
Just to be clear: for an annular wavefront, if you use the normal Zernike shape multiplied by the normal null and subtract that from the wavefront, you get the same result as if you use the annular Zernike shape multiplied by the "annular null".
I guess I'm leaning to the way Dale said it above. Have DFTF display and use internally the annular null that works with the annular Z8 curve. It seems like the least code changes, at least for the "basic functionality" code where you are loading an igram and processing it to the results tab.
I'm not sure how changing the meaning of the null affects all the many other tools in DFTF. Let's see: the "artificial null error margins" feature requires working with the annular null. I think that's it.
I'm strongly leaning to the way Dale suggested. Going all in on having the null be the annular null.
I did not suggest that you use the same null value and that it be the annular null. In fact I said I think we need to recalculate a different value for annular. The current null is correct for non-annular; don't change that.
I'm trying to figure out if the null modification I use for annular in the Null_unwrap routine is correct. I think it is not. I'm working through all the messages on Groups by Mike to sort it out. Stay tuned. If you want to try, look at message 34149 where Mike states what the null values should be for annular and then for circular. I don't have much time, but when I compute the ratio of those numbers I don't get what Null_unwrap computed.
I looked at that message just now and it doesn't have all the needed information for dealing with null. Somewhere Mike posted the formula for the annular null.
I'm pretty sure you remember that DFTF Zernike values are sqrt(5) larger than in the Wikipedia article about Zernike polynomials, and probably also Mike's R program Zernike constants. So that's the sqrt(5) part of message 34149.
Just to make sure we are on the same page, here is the Z8 curve for a 90% obscured mirror (a bit extreme, I know). The orange curve is how DFTF populates the Z8 matrix now (just the outer 10%) and the blue curve is how "Mike's" annular Zernike populates the Z8 matrix before using that matrix to run the solve() function.
Because a parabolic mirror of a given F/# and aperture is supposed to have the exact same shape whether or not it is perforated, you have 2 ways to subtract off the null. One is the existing DFTF way, which is to use the circle Zernike curve with the null value from the mirror dialog. Or you can do it the annular way, where you use the annular Z8 curve multiplied by a null calculated with a new formula (which I don't know at the moment - Mike published it at some point almost a year ago I think). Both methods give you the exact same null shape to subtract off. But because the annular Zernike curve is more extreme, obviously the annular null value will be much smaller to compensate. I could be wrong. It might be that it's not so simple and the parabolic null involves constants in more Zernike terms (second-order spherical aberration). But I seem to remember Mike said the parabolic null correction is still all in Z8.
Now looking at the graphs you might think "how can that possibly be the same Z8 - just by multiplying one by a constant you can't possibly get the other". Well that's probably because the difference between the two can be found in the defocus term. Which means if you want to use the existing DFTF calculated null, you need to subtract the null off before calling solve(). But if you use the annular null you can do it before or after calling solve(). Again, I'm leaning towards having the mirror configuration dialog display the annular null for perforated mirrors.
By the way, note that the peak to valley of the blue waveform is about 1.43 waves, just like in the circle Z8 curve. Pretty cool. Mathematically "beautiful".
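The claim that the two Z8 curves differ only by defocus (and piston) can be illustrated numerically, assuming annular Zernikes are built by orthogonalizing the circle terms over the annulus (Mahajan's construction, which I believe Mike's code follows). The circle Z8 restricted to the annulus then decomposes exactly into annular-Z8 plus defocus plus piston pieces, which is why both null-subtraction routes remove the same shape:

```python
import numpy as np

eps = 0.5                                  # example obstruction ratio
rho = np.linspace(eps, 1.0, 100001)
w = rho                                    # area weight for inner products

def ip(f, g):
    return np.sum(f * g * w)

piston  = np.ones_like(rho)
defocus = 2*rho**2 - 1
sa      = 6*rho**4 - 6*rho**2 + 1          # circle Z8 over the annulus

# Gram-Schmidt over the annulus -> annular piston, defocus, spherical
ann = []
for f in (piston, defocus, sa):
    g = f.copy()
    for b in ann:
        g = g - ip(g, b) / ip(b, b) * b
    ann.append(g)

# circle Z8 = annular Z8 + some defocus + some piston (residual ~ 0)
A = np.column_stack(ann)
c, *_ = np.linalg.lstsq(A, sa, rcond=None)
resid = np.abs(A @ c - sa).max()
print(c, resid)
```

The nonzero defocus share in c is the leftover that lands in the defocus term when the "wrong" basis is used, matching the point about subtracting the null before or after solve().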
From groups.io 31896 (it is what I used to adjust the null in Null_unwrap() as well. Good.) Here's the formula again to leading order:
C8 = -r^4 / (24 R^3 λ) × (1 - ε^2)^2
so you multiply by the factor at the end.
If you're working with Annular Zernikes to fit the wavefront you should use this formula (or the extended one from message 31132) for the numerical null. If you're using circle Zernikes for an obstructed/annular wavefront you should leave the null alone.
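The leading-order formula above can be wrapped in a small helper to see the annular scaling in action. The mirror numbers below are hypothetical (a 200 mm f/5 paraboloid at 550 nm), and recall the thread's note that DFTF's displayed coefficients carry an extra sqrt(5) scale, so absolute values in DFTF will differ:

```python
def annular_sa_null(r, R, wavelength, eps):
    """Leading-order numerical null from the formula above (groups.io 31896):
    C8 = -r^4 / (24 R^3 lambda) * (1 - eps^2)^2.
    r = mirror radius, R = radius of curvature, wavelength: all in the same
    length unit. eps = obstruction ratio; eps = 0 gives the circular null."""
    return -r**4 / (24.0 * R**3 * wavelength) * (1.0 - eps**2)**2

# hypothetical 200 mm f/5 mirror: r = 100 mm, R = 2000 mm, 550 nm light
circ = annular_sa_null(100.0, 2000.0, 550e-6, 0.0)
annu = annular_sa_null(100.0, 2000.0, 550e-6, 0.5)
print(circ, annu, annu / circ)   # ratio is (1 - 0.25)^2 = 0.5625
```

So a 50% obstruction shrinks the numerical null to 56.25% of the circular value, consistent with the annular null being noticeably smaller.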
It is hard for me to know just how bad the current annular algorithm DFTFringe uses is compared to Mike's current code. George, from your current testing it looks like it might not be too bad. I don't have the skill to make sure that is the case these days. I know there are so many settings that people (and I) can get wrong that confuse the issue.
Keep up the testing as you have time, to make sure it is not way off. I'm happy now knowing that at least the null it uses seems to be good. The problem is that the mirror dialog does not know anything about the obstruction outline on the igram, which can be different for each igram. So what to do about that? Perhaps just add a note to the null saying so in the popup. It must know about the obstruction ratio computed from the outline in order to compute the actual obstruction ratio. The obstruction in the mirror config was never meant to describe how the igram is obstructed. However, for Cassegrains it is closely related.
What is the obstruction for in the mirror dialog?
Okay, so you bring up an important point. If one is using annular Zernikes and each igram has a different obstruction diameter - even if by one pixel - then they may all have different nulls.
One solution is to save a null with each wavefront created, each potentially slightly different.
However, because you mask out parts of the wavefront when calculating the zernike constants (you already do that now), it's okay to use an annular zernike set and an annular null that has a different inner obstruction percentage than that of the current igram.
As long as the inner obstruction diameter is within a few pixels it shouldn't matter much. If the inner outline is larger than expected based on mirror config, you just mask out the missing data. If the inner outline is smaller then you just ignore some of the igram data when calculating the Zernike values.
So I would probably allow the outline to vary up to say 3% (of total mirror diameter) larger or smaller than the configuration, and if it's outside the range then maybe pop up an error. Maybe allow the user to configure this. Also, when there is an obstruction in the mirror config, maybe automatically create the initial inner outline based on the outer outline as a starting point.
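The tolerance rule above could be sketched like this. Everything here is hypothetical (function name, the 3% default) and just illustrates the proposed check, not DFTF code:

```python
def obstruction_within_tolerance(outline_eps, config_eps, tol=0.03):
    """Hypothetical helper: accept an igram's traced inner outline when its
    obstruction ratio (as a fraction of the mirror diameter) is within
    `tol` of the mirror-config value, per the 3% allowance suggested above."""
    return abs(outline_eps - config_eps) <= tol

print(obstruction_within_tolerance(0.32, 0.30))  # True: within the 3% band
print(obstruction_within_tolerance(0.36, 0.30))  # False: would pop the error
```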
Or maybe just storing a different null with each wavefront is not too bad?
Ugh. If I do the second solution where I use the mirror config diameter, the problem is that if you change that value you get a different null, and it will mess up all the previously calculated wavefronts currently open in the gui.
Anyway, I'm beginning to wonder if this whole feature is moot except for people who are around 90% obstructed and what amateur is going to do that?
It is only moot if DFTFringe is off from the correct data by some value that matters. So greater than 1/20 wave or so.
Like I have said, I have not found a satisfactory way to verify that it works correctly, except by comparing it to Mike's code. That comparison is a bit hard since each uses different units for some of those values. I thought the current state of DFTFringe was that it uses an old version of Mike's code that is not correct. By how much, I forget. But I thought enough to be significant for typical Cassegrain mirrors.
You have puzzled over why the null gets smaller for annular apertures. Me too. Here is why I think it happens. For a circular aperture, Zernikes try to reduce the RMS value over the whole aperture by adjusting various values. When we look at the waveform for an uncorrected mirror it has a high edge and a high center. The fitting will adjust defocus so that the SA has the lowest RMS value. But it is constrained by needing to also balance the high center area.
With an annulus it no longer has to balance anything in the center and can now use a different defocus value, and perhaps others, to find the fit with the smallest RMS. That results in a smaller null as well.
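This balancing argument can be checked numerically: over a full disc the circle defocus and spherical shapes are orthogonal (the best defocus adjustment is zero), but over an annulus a nonzero defocus term genuinely reduces the residual RMS, so the fit lands on a smaller spherical/null value. A sketch using unit-coefficient circle shapes (not DFTF code):

```python
import numpy as np

def best_defocus(eps):
    """Defocus amount minimizing the RMS of (SA - c*defocus) over the clear
    aperture with inner obstruction ratio eps; returns (c, residual RMS)."""
    rho = np.linspace(eps, 1.0, 200001)
    w = rho                                  # area weight
    sa = 6*rho**4 - 6*rho**2 + 1             # circle spherical, unit coeff
    dz = 2*rho**2 - 1                        # circle defocus, unit coeff
    c = np.sum(sa*dz*w) / np.sum(dz*dz*w)    # weighted least-squares defocus
    r = sa - c*dz
    return c, np.sqrt(np.sum(r*r*w) / np.sum(w))

c0, rms0 = best_defocus(0.0)   # c0 ~ 0: orthogonal over the full disc
c5, rms5 = best_defocus(0.5)   # nonzero defocus now helps, RMS drops
print(c0, rms0)
print(c5, rms5)
```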
Perhaps DFTFringe does work well enough but I mistook user errors in setting it up for errors in the software.
I was confused but now it's obvious why the annular null is smaller. It's because the Z8 curve is bigger. You need to multiply the annular Z8 curve by a smaller annular C8 null to get the same result.
If you look at my graph 6 posts ago you can see that the orange curve dips only slightly in the center but the blue one dips a LOT in the center. The difference in that dip explains the difference in nulls.
George. Each wave front stores the outlines both inside and outside. So each wave front has the associated information needed to compute the annular null. Notice in the code I posted online it uses those values to compute it.
Ok, I think it is starting to soak in. Null_unwrap removes the correct annular null, but it ignores using the proper annular Zernike values for tilt and defocus. That is why George is seeing improper tilts and defocus. It probably needs a bit of a rework for anything that the user wants to remove or ignore - that is, to use the annular Zernike calculations instead of polar.
Each wave front stores the outlines both inside and outside. So each wave front has the associated information needed to compute the annular null.
I thought that was probably the case. I wasn't 100% sure. I used to know this of course but I forget things.
So that is fantastic. However, the null shown on the mirror config page might be a little different from the annular nulls used on each individual wavefront. If you just calculated 10 wavefronts and are about to average them, note that they may all have slightly different annular nulls, as the inner outline may move a pixel or two here and there.
I'm trying to remember how the current average function works. I think your wavefront structure has the wavefront with the null already removed. No, that's not it. The wavefronts still have tilt and everything in them. I remember now: it's all averaged together, and then there is a new wavefront and that will have its own null.
Yeah. The more I think about it, the less the gui has to change for annular Zernikes.
Great news - Dale got the annular Zernikes working and it's in branch "annulus" on github at the moment. But there are problems with the gui part:
Here is an example of a perfect annular mirror result. Note that Strehl is .95 and conic fit is -1 (both good). However, the spherical Zernike shows 3.243 waves of error! It should be closer to zero! This is because the non-annular null for this mirror is -7.1758 as you can see in the upper red circled area, but the annular null is -3.93. So...
1) Instead of showing "SANull: -7.17", it should show "annular null: -3.93". The spherical null needs to be fixed as well so that it shows a value close to zero in this case.
2) This is a problem, right? Because there may be 5 wavefronts and, depending on the inner outline position, they will all have slightly different nulls. The displayed null needs to change as you change wavefronts, and the displayed null needs to be stored with each wavefront.
3) If you change the mirror parameters, it should, I suppose, change the null for a wavefront already processed. Right? But not all the other wavefronts.
3b) Unless you click "recompute"?
4) What happens if you merge two wavefronts with "average"? In the existing code it averages the un-nulled wavefronts. We can't do this as each wavefront has a different null. We need to pick one null (from the first wavefront to be averaged) and adjust all the other waves to this null value.
[edit - okay, the averages don't have to be un-nulled. You will just get a new null with the new average wavefront. However, I need to make sure my code that fills in some of the center hole with Zernike-averaged values is still running for annulars - we need to fill a few pixels inward in case some of the inner circles extend a few pixels toward the center, so we don't get crazy results when averaging]
5) I'm nervous about defocus being -2.6 waves in the image below, yet the simulation used +2 waves of defocus. I expected them to be different, but I thought only by a multiplicative factor. Could there be a sign error in the code somehow? Or is there a bug with the polynomial being inverted? Tilt looks good.
If you look at my todo.txt file in the archive you should see things I have listed that probably need work when in annulus zernike mode.
Defocus is one, and perhaps very complex, because with annular it is no longer isolated but also spreads into some of the other terms. I think. We may need to get Mike Peck to explain if that is so.
This was implemented in PR #123 and merged with master in PR #125
closing issue.
I put this here only for developer tracking purposes.
According to this thread, the Zernike code needs to be updated to the 2022 version: https://groups.io/g/Interferometry/topic/103666155
Feel free to add context and information