brandondube / prysm

physical optics: integrated modeling, phase retrieval, segmented systems, polynomials and fitting, sequential raytracing...
https://prysm.readthedocs.io/en/stable/
MIT License

How to create a psf from a pupil with the new (v0.20) style? #38

Closed diplodocuslongus closed 3 years ago

diplodocuslongus commented 3 years ago

Hello,

This is an incredibly promising package, bravo for all the work! I've installed the current development branch (0.19.2.dev129) which I later on discovered is quite different from the stable version: https://prysm.readthedocs.io/en/latest/releases/v0.20.html

This explains why I've got lots of errors when trying to run the notebook Defocus and Contrast Inversion.ipynb in the docs/source/examples folder found in the development branch.

Since I'm new to prysm I decided to go on and stick to the new syntax and changes (which hopefully are here to stay!) and dig into the documentation to try to adapt said example notebook by myself.

Well, I haven't made it very far actually; besides importing the correct modules and dealing with the siemensstar object, I'm hitting a wall with this part:

    pu = NollZernike(Z4=defocus_values[idx])
    ps = PSF.from_pupil(pu, 4, norm='radiometric')  # pu defaults to diameter of 1, this makes F/4
    mt = MTF.from_psf(ps)

For the pupil, I used zernike_nm and noll_to_nm from the prysm.polynomials module to convert from Z4 (defocus) to (n, m), as follows (parameters for the grid are yet to be chosen properly, but this works):

from prysm.coordinates import make_xy_grid, cart_to_polar
from prysm.polynomials import noll_to_nm, zernike_nm

x, y = make_xy_grid(64, diameter=1)
r, t = cart_to_polar(x, y)
znoll = noll_to_nm(4)  # NollZernike(Z4=defocus_values[idx])
pu = zernike_nm(*znoll, r, t)

I can plot the pupil all right. But the first question arises: how can different amounts of defocus be taken into account as in the loop in the example notebook?

Moving on to PSF.from_pupil, the doc says the following about the PSF module:

The PSF module has changed from being a core part of propagation usage to a module purely for computing criteria of PSFs, such as fwhm, centroid, etc. PSF has been removed

So now that prysm.psf is no longer about propagation but about dimensional specifications of the PSF, how can I create a PSF from the pupil just created with zernike_nm?

Also,

MTF was removed, use mtf_from_psf()

And the documentation about mtf_from_psf() explains:

mtf_from_psf(psf, dx=None): Compute the MTF from a given PSF. Parameters: psf (prysm.RichData or numpy.ndarray) – an object with a data property holding the 2D PSF, or the array itself.

I suppose that if I had obtained the PSF, this part with the new mtf_from_psf() would be easy to deal with.

So it seems my issues boil down to my first two questions above.

Thanks a lot in advance for any hints in this regard!

Ludo

brandondube commented 3 years ago

Hi,

First off - thanks for this issue, it is extremely valuable, and you've obviously done a huge amount of digging on your own. The shortest answer to your question is that this tutorial shows what you are asking about.

The docs are the final(ish) piece of the v0.20 development process and still very incomplete. It has taken a very long time to rewrite all of the code and get it all tested again. On a compatibility note, prysm is pre-v1.0 and makes no compatibility guarantees, so I am free to do whatever I want to the API. Arguably, that's better than other programs (say, FALCO) that declared v1.0 a long time ago but still release breaking changes in point releases and go through several major versions per year. There is no point to a major version if it will be unsupported in three months!

The new design of prysm is indeed here to stay, ad infinitum. Six years of different designs, and the user experience with them, fed into this design. prysm will become a bit more verbose and dissimilar to how a lot of people like their diffraction program written (pass the grid instead of npix, quelle horreur...). It is possible I or someone else will make a prysm-lite with a more terse object-oriented API for common problems, I don't know. The driving factor in this change is to make it easier to embed and extend prysm. Heavy object-oriented systems (like POPPY) require a ton of work to extend. A data-oriented system (prysm v0.20+) is trivial to extend, because the data is the contract, not some object system.

In your specific conversion...

x, y = make_xy_grid(64, diameter=1)
r, t = cart_to_polar(x, y)
znoll = noll_to_nm(4)  # NollZernike(Z4=defocus_values[idx])
pu = zernike_nm(*znoll, r, t)

The diameter should be two: the Zernike polynomials are orthogonal on r in [0, 1], and r is a radius, so a grid with diameter 1 only spans r in [0, 0.5].
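As a sanity check on that normalization, one can verify numerically that Noll Z4 (defocus, written as sqrt(3)(2r^2 - 1)) has unit RMS over the unit disk when the grid spans a diameter of two. A numpy-only sketch (grid size and tolerance are arbitrary choices here, not prysm defaults):

```python
import numpy as np

# Sample a "diameter of two" grid, so r runs from 0 at the center to sqrt(2) at the corners.
n = 512
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
r = np.hypot(x, y)

# Noll Z4 (defocus): sqrt(3) * (2r^2 - 1), normalized to unit RMS over the unit disk.
z4 = np.sqrt(3) * (2 * r**2 - 1)

# RMS over the unit disk should be ~1; outside r=1 the polynomial keeps growing,
# which is why it is only orthonormal on r in [0, 1].
rms = np.sqrt(np.mean(z4[r <= 1] ** 2))
```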

Your pu variable isn't really a "pupil" though -- it's just a particular Zernike polynomial, evaluated on a particular grid. The prysm.propagation.Wavefront.from_amp_and_phase function will turn it into a "pupil" (complex wavefunction, which we know a priori exists in the pupil plane and thus "is" a pupil).
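The idea behind from_amp_and_phase can be sketched with plain numpy (the grid size, wavelength, and 100 nm defocus coefficient below are illustrative choices, not prysm's defaults): an amplitude mask and a phase map combine into a complex wavefunction.

```python
import numpy as np

# Grid spanning a diameter of two, per the note above.
n = 64
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
r = np.hypot(x, y)

amp = (r <= 1).astype(float)                 # circular aperture (transmission)
phs_nm = 100 * np.sqrt(3) * (2 * r**2 - 1)   # defocus OPD, in nm
wvl_nm = 550                                 # wavelength, also in nm

# complex wavefunction: amplitude times exp(i * 2*pi/lambda * OPD)
pupil = amp * np.exp(1j * (2 * np.pi / wvl_nm) * phs_nm)
```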

how can different amounts of defocus be taken into account as in the loop in the example notebook?

Keep a copy of z4 = zernike_nm(*znoll, r, t) around and do Wavefront.from_amp_and_phase(..., z4*2), or z4 times any number. In general, prysm.polynomials.sum_of_2d_modes([z4], [123.456]) does what you want, but for a single mode you may as well just scale it directly.
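Concretely, sweeping defocus is just scaling one precomputed mode array. A numpy sketch (the coefficient values here are arbitrary):

```python
import numpy as np

# Precompute the defocus mode once...
n = 64
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
r = np.hypot(x, y)
z4 = np.sqrt(3) * (2 * r**2 - 1)

# ...then a sweep is just one scalar multiplication per step.
defocus_values = [0.0, 0.25, 0.5, 1.0]
phases = [c * z4 for c in defocus_values]
```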

A weakness of the current design is that it is fairly verbose or error prone to sweep defocus with other phase errors. I am aware, I just don't especially have a good design thought through. The way I would do it is to use my oracle knowledge of which mode index is defocus and futz with that:

import numpy as np
from prysm import polynomials

nms = [polynomials.noll_to_nm(j) for j in range(2, 25)]  # tilt up to some higher-order term
idefocus = nms.index((2, 0))  # defocus is Z(n=2, m=0)

basis_set = list(polynomials.zernike_nm_sequence(nms, r, t))
coefs = np.random.rand(len(nms))
c2 = coefs.copy()
# nominal
phs = polynomials.sum_of_2d_modes(basis_set, coefs)

# defocused
c2[idefocus] = coefs[idefocus] + 2  # or whatever
phs2 = polynomials.sum_of_2d_modes(basis_set, c2)
This is the most computationally efficient way.

diplodocuslongus commented 3 years ago

Thanks a lot for your reply. The lens MTF model tutorial helps a lot. I've made good progress, but I'm first focusing on getting the PSF, MTF, and blurred image of the Siemens star right for a single value of defocus (the defocus sweep will come later). The lack of dedicated time on the weekend prevents me from writing a more thorough reply and posting my draft notebook as a gist. I'll stop here for now with a defeat:

from prysm.convolution import conv
blurred = conv(source_img, psf.data)

throws: operands could not be broadcast together with shapes (256,256) (512,512). As I'm writing this I think I've found the problem: I did psf = pup.focus(efl, Q=2).intensity with Q being the oversampling, hence twice as many samples as my image. Family's calling, so I'll have to (reluctantly) pursue my quest later.

PS: the more verbose style has the advantage of forcing the user to understand what is happening at a deeper (more granular) level.

Again, thanks a lot, and what a work you have done!

brandondube commented 3 years ago

I think it is useful to understand at first how prysm works (this document still needs a lot of work...). It's a numerical program, and the basic building block is an array of data. While it's true that, in general, everything is built out of equations, at some point those equations are better thought of as "algorithms," because they would be ten times harder to express in math than in code. Part of "how prysm works" is thinking about your problem as a series of planes, each plane having a grid on which everything is defined. If you have an object at 256x256 and a PSF at 512x512, you must have made them separately. Prysm doesn't really care if you do that, but it would be easier to do...

psf = pup.focus(...).intensity
obj = objects.siemensstar(psf.r, psf.t, psf.x.max()*0.8)

Taking the max like that is not the most computationally efficient, since numpy will scan all 512x512 elements of psf.x to find the max, but if rasterization of the object is not a bottleneck, you should not concern yourself with that.

The Q argument in focus should be renamed at this point. Once upon a time it really was Q because you basically had to have a pupil which completely filled the array (and likely folks will mostly use prysm that way). Nowadays it isn't true in general that the pupil will fill the array, and there are some applications where you wouldn't want that. What Q is really controlling is how much (zero) padding is done around the pupil before the FFT is done, which has the effect of making the sample spacing ("""pixel pitch,""") finer in the PSF plane.
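The padding view of Q can be illustrated with a bare FFT (a numpy sketch with made-up sizes): padding the pupil by a factor Q before transforming yields Q-times-finer sampling of the PSF.

```python
import numpy as np

n, Q = 64, 2
pupil = np.ones((n, n))        # toy pupil filling its array

# Q-style zero padding: grow the array to Q*n per side before the FFT.
pad = n * (Q - 1) // 2
padded = np.pad(pupil, pad)

psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(padded)))) ** 2
# The PSF plane now has Q*n samples over the same angular extent, i.e. the
# sample spacing ("pixel pitch") is Q times finer.
```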

PS: the more verbose style has the advantage of forcing the user to understand what is happening at a deeper (more granular) level.

prysm is designed for three things, basically:

diplodocuslongus commented 3 years ago

A quick message for now just to signal a draft notebook here; please note:

brandondube commented 3 years ago

Luckily, designing to not surprise an expert is not antithetical to being (sufficiently) easy for the novice to use. The "oracle" knowledge I invoked about Z4 contained a typo anyway; the ANSI notation for Zernikes is $Z_n^m$ (subscript n, superscript m), where n is the radial order and m the azimuthal order. For defocus, n=2 (quadratic) and m=0 (azimuthally invariant).

In your notebook, going top to bottom, you've been bitten by a common misteaching (IMO) about how to implement phase errors in numerical diffraction. This is an area where prysm's design for speed may be punitive to the novice. Before that, when you do

mask = circle(1,rho) == 0

You could just do mask = ~circle(1,rho), if you wanted. Assigning NaN is the right thing to do for plotting:

phs_plot = zerp4.copy()
phs_plot[mask] = np.nan
plt.imshow(phs_plot)

"phs" is a shorthand for "phase." A nice API for this can probably be added to prysm after a little design work. Assigning NaN and then propagating is because the IEEE rules for floating point numbers make NaN "contagious," and the propagation sprays information from each sample of the pupil into each sample of the PSF, making everything NaN. Prysm does not check that because adding an if np.isnan(pup.data).any() is actually pretty computationally expensive. I am not willing to compromise performance of critical operations for safety in this case, because the user will notice it blew up anyway.

Anyway -- the Zernike polynomials are orthogonal on the unit circle, but they can be computed over any domain. The corners of the array are r=sqrt(2). It's not "illegal" to compute the Zernikes out to there. If you plot them it will look funny/wrong, but it actually has no bearing on the computation because the amplitude is zero in that area. You could set the phase to zero out there instead of NaN if you wanted, for plotting, and then you wouldn't need to "fork" the array with the copy and set NaN. Those steps are not automatic because the Zernike polynomials are not zero outside the unit circle, which is that "do no more than you say you do" principle.

Your MTF bucks expectation because you did an aliased propagation. In coherent imaging when you have the "amplitude PSF" or "coherent PSF," the Nyquist frequency is one half of the incoherent case. When you did psf = pup.focus(efl, Q=1).intensity making an intensity PSF, this doubling of the diffraction cutoff frequency produces aliasing. Q=2 would be critically (Nyquist) sampled. Your PSF will look very different that way, and the result of the convolution will be correct. You'll need the PSF grid to synthesize the star, or to make another at twice the density of the pupil grid (doesn't really matter).
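The doubling can be seen in one dimension with plain numpy (sizes and thresholds below are arbitrary): squaring a band-limited field produces an intensity whose spectrum has twice the support, which is why a propagation that critically samples the field aliases the intensity.

```python
import numpy as np

n = 256
f = np.fft.fftfreq(n)

# Band-limited "field": spectrum confined to |f| < 0.1.
field_spec = (np.abs(f) < 0.1).astype(float)
field = np.fft.ifft(field_spec)

# Intensity = |field|^2; its spectrum is the autocorrelation of the field's,
# so its support is twice as wide.
inten_spec = np.abs(np.fft.fft(np.abs(field) ** 2))
support = np.abs(f)[inten_spec > 1e-9 * inten_spec.max()].max()
```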

I am glad you are sticking with prysm. No vacuum salesman would say their product doesn't suck, but prysm is pretty slick, IMO. It's about a third the size of most of its competitors (prysm sits at 3,985 lines today, POPPY is 10,890, FALCO is 11,745 even without counting its dependency on PROPER against it, e.g.) despite doing more things than any of them and being so much faster. I personally have found most others extraordinarily difficult to learn, too, because their class systems are very opaque. I once spent more than a day trying to figure out how to place an arbitrary array of phase data into a POPPY model, for example, to learn that you couldn't! That feature has since been added, but there is a simplicity to how prysm is "meant" to be used.

If you flip back in time through the release notes, there used to be a stability tick-tock: v0.19 was stable (few/no breaking changes), and so was v0.18; v0.17 was breaking, v0.16 stable, v0.15 breaking, etc. Sometimes the breaking changes have been the removal of untested and PITA-to-test features, but it's usually been a redesign of the object-oriented API. From v0.20 on, the data-oriented API is largely what it is (there is so little to the design, how could it need breaking changes?), so I expect a scarcity of breaking changes. The changes are usually glossed over with deprecation warnings; it is just that this particular case is so total that that is not possible.

diplodocuslongus commented 3 years ago

you've been bitten by a common misteaching (IMO) about how to implement phase errors in numerical diffraction.

This is very possible, especially because this misteaching was self-taught. It's very rare that someone (besides a certain professor) points out what's wrong (or could be improved) in one's way of doing things; sometimes an approach works, no question is asked, no problem is encountered, and so little is learnt. With prysm (the new syntax) and your comments (sincerely appreciated), there's a lot to learn. The new syntax calls for a more rigorous approach to the problem. This is best illustrated by the old defocus and contrast inversion notebook versus the one (yet to come; working on it) with the new syntax. Honestly, the old syntax was more non-expert friendly, but I like the new one for its "put your hands in the engine" nature, maybe a bit like Linux/OSX vs Window$ (Linux user here).

Assigning NaN is the right thing to do for plotting

Yes I had noticed from both my own tests and some of the tutorials.

It's not "illegal" to compute the Zernikes out to there. If you plot them it will look funny/wrong, but it actually has no bearing on the computation because the amplitude is zero in that area. You could set the phase to zero out there instead of NaN if you wanted, for plotting, and then you wouldn't need to "fork" the array with the copy and set NaN.

That's actually what I did (in a different draft) but the plot doesn't look as good as with nan, maybe some colour map manipulation could make it look better (as if it had been NaN-ed).

Your MTF bucks expectation because you did an aliased propagation. [...] You'll need the PSF grid to synthesize the star, or to make another at twice the density of the pupil grid

Points taken.

Q=2 would be critically (Nyquist) sampled. Your PSF will look very different that way, and the result of the convolution will be correct.

I did use Q=2 but the psf looked so strange...

If you flip back in time through the release notes,

I started to, not to find out what broke or was stable, but to connect the old and the new syntax: what the old was supposed to do, and what the new expects me to do. But... this does devour time!

Will update the notebook in the coming days.

brandondube commented 3 years ago

On visuals, RdBu is a good colormap for making zero outside the pupil invisible. Additionally, when you plot the PSF, you might try the lanczos interpolation routine instead of bilinear or something like that. Q=2 is critically sampled, but you would also say it's "not aesthetically sampled" (still looks chunky). The lanczos interpolator is "the best one" matplotlib offers. It's also just about the best one we as a species can come up with. I would recommend that whenever you look at any 'image' with matplotlib, you use that one. Larger values of Q, say Q=3 or Q=4 will make PSFs that look smoother, but when you convolve the star with them it will be blurrier (Q is tied to blurriness; a large Q says the diffraction cutoff is far inside the bandpass of the system => the system is very oversampled, and so looks blurry). You will eventually notice that propagation with a large value of Q (say, 12) takes quite a long time.
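For instance, with matplotlib (the data here is a random stand-in for a real PSF, and the colormap is just an example choice):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

psf = np.random.rand(64, 64)  # stand-in for a real PSF array
fig, ax = plt.subplots()
im = ax.imshow(psf, interpolation="lanczos", cmap="RdBu")
```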

My opinion on old vs new is that the old syntax was very terse and hid the plumbing from the user. The new syntax doesn't move the needle on how much expertise you need in imaging/diffraction, but you need to know a little bit about doing the plumbing. The benefit to that, to me, is that if you want to become an advanced user or a contributor, the amount of stuff you have to learn to advance in any given area is limited in scope. E.g., if you wanted to contribute some new polynomial code, you need only know about your polynomials and nothing about diffraction. Unfortunately, some tasks like doing a rigorous image simulation require that you know about a fair number of things.

diplodocuslongus commented 3 years ago

For anyone interested, I've updated my drafty notebook here.

I'll close this issue as you have answered all my questions (and much more).

Thank you.