mperrin / poppy

Physical Optics Propagation in Python
BSD 3-Clause "New" or "Revised" License

Bug in QuadraticPhase? #252

Open DaPhil opened 6 years ago

DaPhil commented 6 years ago

Hi, I just discovered something I am pretty sure is a bug. Although I tried to dig deep into the source code, I just cannot see where it is coming from. Try the following:

import poppy
import astropy.units as u

wf = poppy.FresnelWavefront(beam_radius=1.0*u.m,
                            wavelength=1.0*u.m,
                            units=u.m,
                            npix=128,
                            oversample=1)
wf *= poppy.QuadraticLens(f_lens=-10*u.m)

and inspect the value of the radius of curvature. I did this by printing the property z to the console in the QuadPhase get_phasor function:

def get_phasor(self, wave):
    """ return complex phasor for the quadratic phase

    Parameters
    ----------
    wave : obj
        a Fresnel Wavefront object
    """
    print(self.z)
    ...

In the above code, I get -10 m printed out (instead of a positive value). Now change the units for the wavelength to something different, e.g. mm, when initializing the wavefront. The output then reads 986960.4401010844 m.

I tried to look at what is happening but I cannot figure it out.

DaPhil commented 6 years ago

Ok, I just saw that the FresnelWavefront class overrides the `__imul__` method to call the apply_lens method when a QuadraticLens is applied. That method alters the beam parameters in a way that produces the unexpected values. But this means the curvature I want to apply at the aperture plane is not the one I defined. I just want to apply a quadratic phase to a wavefront that has a flat phase.

In the meantime I checked the notebook for the Fresnel propagation. There, Example 2: Divergence of a Gaussian Beam near the Beam Waist and Adding Lenses fail when units other than meters are used.

DaPhil commented 6 years ago

Ok, QuadPhase is the thing I need to use to overcome this issue. I don't think that this is the intended behavior, though (since it is used incorrectly in the notebook)?

mperrin commented 6 years ago

The units issue definitely sounds like a bug.

As for applying the phase you want, you’re right there’s a subtle distinction between just applying a phase to a wavefront while leaving it “flat” in terms of the beam curvature, versus applying a phase that changes the curvature. I believe you can have the effect you want by using the ThinLens class rather than QuadraticLens. But this is a little non-obvious and I wonder if there’s a better way to do this. @douglase what are your thoughts on this?
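For concreteness, here is a minimal numpy sketch (not poppy internals; the function name and sign convention are my own assumptions) of what applying a pure quadratic phase does: it only multiplies the complex field array, without touching any pilot-beam bookkeeping.

```python
import numpy as np

def apply_quadratic_phase(field, x, y, z_m, wavelength_m):
    """Multiply a complex field by a quadratic (spherical) phase of
    radius of curvature z_m. Illustrative only; poppy's QuadPhase
    handles units and sign conventions itself."""
    k = 2 * np.pi / wavelength_m
    rsq = x**2 + y**2
    return field * np.exp(1j * k * rsq / (2 * z_m))

# a flat two-point "wavefront" picks up phase but keeps unit amplitude
x = np.array([0.0, 1.0])
y = np.zeros(2)
out = apply_quadratic_phase(np.ones(2, dtype=complex), x, y, -10.0, 1.0)
```

Changing the beam curvature, by contrast, also updates the waist position and Rayleigh distance of the pilot Gaussian, which is the apply_lens behavior observed above.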

mperrin commented 6 years ago

Ah yes, you can use QuadPhase rather than QuadraticLens. We could clarify the distinction better in the docs. Sounds like that notebook may need an update too.

mperrin commented 6 years ago

Examining the log clarifies a bit what is going on. Here are examples of the output log for the two cases of using QuadPhase and QuadraticLens. (Note: I added some additional log print statements equivalent to your print(self.z), so there's some extra output in the following compared to the code on GitHub.)

Using QuadPhase:

wf = poppy.FresnelWavefront(beam_radius=1.0*u.m,
                            wavelength=1.0*u.m,
                            units=u.m,
                            npix=128,
                            oversample=1)
wf *= poppy.QuadPhase(z=-100*u.cm, name='QuadPhase')
[  poppy] Skipping oversampling, oversample < 1 or already padded 
[  poppy] Oversampling > 2x suggested for reliable results.
[  poppy]  in init for QuadPhase, z=-100.0 cm, _z_m = -1.0
[  poppy]  in get_phasor for QuadPhase, z=-100.0 cm, _z_m = -1.0
[  poppy] Applying spherical phase curvature =-1.00e+02 cm
[  poppy] Applying spherical lens phase =-1.00e-02 1 / cm
[  poppy] max_rsqd =2.00e+00
[  poppy]   Multiplied WF by phasor for Optic: QuadPhase

Using QuadraticLens:

wf = poppy.FresnelWavefront(beam_radius=1.0*u.m,
                            wavelength=1.0*u.m,
                            units=u.m,
                            npix=128,
                            oversample=1)
wf *= poppy.QuadraticLens(f_lens=-100*u.cm, name='QuadraticLens')
[  poppy] Skipping oversampling, oversample < 1 or already padded 
[  poppy] Oversampling > 2x suggested for reliable results.
[  poppy]  in init for QuadraticLens, z=-100.0 cm, _z_m = -1.0
[  poppy] Initialized: QuadraticLens, fl =-1.00e+00 m
[  poppy] ------ Applying Lens: QuadraticLens ------
[  poppy]   Pre-Lens Beam Parameters: w_0:1.000e+00 m, z_w0=0.000e+00 m
z=0.000e+00 m, z_r=3.142e+00 m
[  poppy]   Beam radius at QuadraticLens =1.00e+00 m
[  poppy]  input flat wavefront and QuadraticLens has output beam curvature of =1.00e+00 m
[  poppy] QuadraticLens has a curvature of =1.00e+00 m
[  poppy] QuadraticLens has a curved output wavefront, with waist at -0.9080003316496248 m
[  poppy] Post Optic Parameters:w_0:3.033e-01 m, z_w0=-9.080e-01 m
z=0.000e+00 m, z_r=2.890e-01 m
[  poppy] Set output beam focal length to -1.0 m
[  poppy] Inside Rayleigh distance to Outside Rayleigh distance.
[  poppy]  in init for QuadraticLens, z=-9.869604401089358 m, _z_m = -9.869604401089358
[  poppy]  in get_phasor for QuadraticLens, z=-9.869604401089358 m, _z_m = -9.869604401089358
[  poppy] Applying spherical phase curvature =-9.87e+00 m
[  poppy] Applying spherical lens phase =-1.01e-01 1 / m
[  poppy] max_rsqd =2.00e+00
[  poppy]   Multiplied WF by phasor for Optic: QuadraticLens
[  poppy] ------ Optic: QuadraticLens applied ------
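For reference, the pilot-beam numbers in that log follow from the standard complex-beam-parameter (q) algebra. This is a sketch of the math, assuming the usual thin-lens q transform, not poppy's actual code:

```python
import numpy as np

wavelength = 1.0   # m, as in the example above
w = 1.0            # beam radius at the lens, m
f = -1.0           # lens focal length, m

z_r_in = np.pi * w**2 / wavelength    # input Rayleigh distance, ~3.142 m
q_in = 1j * z_r_in                    # flat wavefront, i.e. at its waist

q_out = 1.0 / (1.0 / q_in - 1.0 / f)  # thin-lens transform of q

R_out = 1.0 / (1.0 / q_out).real      # output curvature: 1.00 m
z_w0 = -q_out.real                    # new waist location: ~-0.908 m
z_r_out = q_out.imag                  # new Rayleigh distance: ~0.289 m
w0_out = np.sqrt(wavelength * z_r_out / np.pi)  # new waist radius: ~0.303 m
```

These reproduce the logged values: curvature 1.00e+00 m, waist at -0.908 m, z_r = 2.890e-01 m, w_0 = 3.033e-01 m.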
douglase commented 6 years ago

The notebook example is a little over complicated, since it includes two lenses, but it illustrates how you add lenses in a way that updates the pilot gaussian beam parameters and seems to run OK.

Without those updates, the propagation algorithm will break down not knowing the distance to propagate to the next beam waist.

It might help to add a cartoon ray trace of the system.

It's confusing that z has multiple meanings (see https://github.com/mperrin/poppy/blob/master/poppy/fresnel.py#L54 where it was suggested it be renamed focal length). I should dig up the reference to blame it on. But in that context, the example at the top is correct, z=-10.

It makes sense that the applied radius of curvature in Marshall's example is not exactly 1 m, since the beam has some divergence without the lens. But I am confused by the difference between the debug statements in Marshall's example and the quoted number in the first comment, 9.869604401089358 m vs 986960.4401010844 m. They are very close to a factor of 1e5 apart.

@DaPhil, can you confirm the units? @mperrin can you push your more descriptive debug version? I always like having more debug statements.

DaPhil commented 6 years ago

> The notebook example is a little over complicated, since it includes two lenses, but it illustrates how you add lenses in a way that updates the pilot gaussian beam parameters and seems to run OK.

The notebook fails, e.g. in Example 2: Divergence of a Gaussian Beam near the Beam Waist, when using wf0 = poppy.FresnelWavefront(beam_radius=1*u.m, wavelength=1*u.um) instead of wf0 = poppy.FresnelWavefront(beam_radius=1*u.m, wavelength=1e-6). If you check the Rayleigh length you see why: it has units m^2/um, so the wavelength has not been converted to m. Doing that conversion in the z_r method indeed fixes this issue. The Adding Lenses section also fails to reproduce the results when using 1*u.um instead of 1e-6 for the wavelength, and I am not yet sure how to fix it, but I guess another method needs the wavelength scaled to m. I am not sure why you say the notebook seems to run OK.

> It's confusing that z has multiple meanings (see https://github.com/mperrin/poppy/blob/master/poppy/fresnel.py#L54 where it was suggested it be renamed focal length). I should dig up the reference to blame it on. But in that context, the example at the top is correct, z=-10.

There is a typo in my very first statement: it yields +10 m when I define poppy.QuadraticLens(-10*u.m). Is this still correct?

> It makes sense that the applied radius of curvature in Marshall's example is not exactly 1 m since the beam has some divergence without the lens. But I am confused by the difference between the debug statements in Marshall's example and the quoted number in the first comment, 9.869604401089358 m vs 986960.4401010844 m. They are very close to a factor of 1e5 apart.

The ratio is due to different values used by me and @mperrin: 1 um vs 1 m for the wavelength and 10 m vs 1 m for the curvature, so we gain 1e6 from the wavelength and lose 1e1 from the curvature.
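A quick numeric check of that factor, using the two curvatures quoted above (both turn out to be pi squared times a power of ten):

```python
import math

first_comment = 986960.4401010844   # m, curvature from the first comment
marshall_log = 9.869604401089358    # m, curvature from the debug log

# the logged value is pi**2 to within floating-point precision
assert abs(marshall_log - math.pi**2) < 1e-12

ratio = first_comment / marshall_log
print(ratio)  # ~1e5 = 1e6 (wavelength) / 1e1 (curvature)
```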

I encountered this issue in the following code (originally comparing poppy to proper):

# -*- coding: utf-8 -*-

import poppy
import astropy.units as u
import matplotlib.pyplot as plt
import numpy as np

w0 = 14.15*u.cm            # beam radius
L = 2.0*u.km               # propagation distance
w_extend = 10              # multiplier setting the spatial extent of the grid
wavelength = 10.6*u.um     # wavelength in vacuum
R0 = L                     # initial radius of curvature
npix = 128                 # number of pixels
oversample = 1             # oversampling factor
nz = 10                    # number of phase screens
dz = L/nz                  # propagation step

# poppy propagation
wf = poppy.FresnelWavefront(beam_radius=w_extend*w0/2.0,
                            wavelength=wavelength,
                            units=u.m,
                            npix=npix,
                            oversample=oversample,
                            rayleigh_factor=2.0)
wf *= poppy.CircularAperture(radius=w0)
wf *= poppy.QuadraticLens(R0) # This is what I thought I have to use
#wf *= poppy.QuadPhase(-R0) # This is what needs to be used

num_levels = 8
fig = plt.figure(101)
fig.clf()

y, x = wf.coordinates()
z = wf.amplitude.T
z = z/z.max()
levels = np.linspace(np.min(z), np.max(z), num_levels)
ax = fig.add_subplot(221)
cf = ax.contourf(x, y, z, levels=levels)
ax.set_aspect('equal')
ax.set_title(wf.z)
ax.set_xlabel('x (m)')
ax.set_ylabel('y (m)')
plt.colorbar(cf)

z = wf.phase
levels = np.linspace(np.min(z), np.max(z), num_levels)
ax = fig.add_subplot(223)
cf = ax.contourf(x, y, z, levels=levels)
ax.set_aspect('equal')
ax.set_title(wf.z)
ax.set_xlabel('x (m)')
ax.set_ylabel('y (m)')
plt.colorbar(cf)

print(wf.pixelscale)
for l in range(nz+1):
    if l==0 or l==nz:
        delta = dz/2
    else:
        delta = dz
    wf.propagate_fresnel(delta)
    print(wf.pixelscale)

    fig = plt.figure(102+l)
    fig.clf()

    y, x = wf.coordinates()
    z = wf.amplitude
    z = z/z.max()
    levels = np.linspace(np.min(z), np.max(z), num_levels)
    ax = fig.add_subplot(221)
    cf = ax.contourf(x, y, z, levels=levels)
    ax.set_aspect('equal')
    ax.set_title(wf.z)
    ax.set_xlabel('x (m)')
    ax.set_ylabel('y (m)')
    plt.colorbar(cf)

    z = wf.phase
    levels = np.linspace(np.min(z), np.max(z), num_levels)
    ax = fig.add_subplot(223)
    cf = ax.contourf(x, y, z, levels=levels)
    ax.set_aspect('equal')
    ax.set_title(wf.z)
    ax.set_xlabel('x (m)')
    ax.set_ylabel('y (m)')
    plt.colorbar(cf)

You will see that at 1900 m, poppy produces aliasing artifacts (while proper does not).

DaPhil commented 6 years ago

From the `__init__` of class Wavefront in poppy_core.py:

        self.wavelength = wavelength
        """Wavelength in meters (or other unit if specified)"""
        if isinstance(self.wavelength, u.quantity.Quantity):
            self._wavelength_m = self.wavelength.to(u.m).value
        else:
            self._wavelength_m = self.wavelength

The wavelength is never scaled to meters. But should it be? Or should one always use the variable _wavelength_m instead?
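To illustrate the hazard in plain Python (a sketch without astropy): if the stored wavelength keeps its original unit, any formula that mixes it with meter-valued quantities, such as the Rayleigh distance z_r = pi * w0**2 / wavelength, silently comes out wrong by the conversion factor.

```python
import math

w0 = 1.0          # beam waist radius, in meters
lam_m = 1e-6      # 1 um correctly converted to meters
lam_um_raw = 1.0  # the same wavelength as a bare number of microns

z_r_correct = math.pi * w0**2 / lam_m      # ~3.14e6 m
z_r_mixed = math.pi * w0**2 / lam_um_raw   # numerically a "m^2/um" value
```

With astropy quantities the composite unit (m^2/um) would at least be carried along and visible, which is exactly the symptom reported for the notebook above.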

douglase commented 6 years ago

Yikes, it does look like something is up with that aliasing. I can reproduce it by propagating directly to 1900*u.m with this minimal working example taken from @DaPhil's post:


import poppy
import astropy.units as u
import matplotlib.pyplot as plt
import numpy as np

w0 = 14.15*u.cm            # beam radius
L = 2.0*u.km               # propagation distance
w_extend = 10              # multiplier setting the spatial extent of the grid
wavelength = 10.6*u.um     # wavelength in vacuum
R0 = L                     # initial radius of curvature
npix = 256                 # number of pixels
oversample = 1             # oversampling factor
nz = 10                    # number of phase screens
dz = L/nz                  # propagation step

# poppy propagation
wf = poppy.FresnelWavefront(beam_radius=w_extend*w0/2.0,
                            wavelength=10.6e-6,
                            units=u.m,
                            npix=npix,
                            oversample=oversample,
                            rayleigh_factor=2.0)
wf *= poppy.CircularAperture(radius=w0)
wf *= poppy.QuadraticLens(R0) # This is what I thought I have to use
#wf *= poppy.QuadPhase(-R0) # This is what needs to be used
print(wf.param_str)

wf.propagate_fresnel(1900*u.m)
plt.imshow(wf.intensity)

[image: resulting intensity pattern showing the aliasing]

douglase commented 6 years ago

@DaPhil on closer inspection, I'm not as sure this is a bug. I haven't been able to find any errors in the calculation, and if I increase the oversampling to greater than unity, for example 10 in the image below, things look reasonable:

[screenshot, 2018-05-24: intensity with oversample=10, no visible aliasing]

Would you mind sharing the PROPER example you are comparing to? Particularly the beam_ratio, since that is 1/oversample.

douglase commented 6 years ago

(With respect to the note above about converting the units to meters: if the value has units of length, whether centimeters or parsecs, it will be converted to meters. If the user didn't specify units, meters are assumed (this is primarily to maintain backwards compatibility with earlier versions of POPPY, which assumed meters everywhere). Do you see somewhere the code is not functioning this way?)

DaPhil commented 6 years ago

> On closer inspection, I'm not as sure this is a bug. I haven't been able to find any errors in the calculation, and if I increase the oversampling to greater than unity, for example 10 in the image below, things look reasonable.

"Bug" is certainly not the right word for it, I think. I also noticed that I can make it work by increasing the oversampling and resolution. But first, this heavily increases the computational cost (and at the moment I only have my six-year-old MacBook Pro for development). And second, proper delivers a different result with the same starting conditions.

> Would you mind sharing the PROPER example you are comparing to? Particularly the beam_ratio, since that is 1/oversample.

PROPER's beam ratio is something different than poppy's oversample. The former determines the physical size of the grid; I think this is what poppy's beam_radius does. In addition, oversample pads with zeros beyond that beam_radius. Did I understand the code correctly when I state this? Anyway, here are the files with the code that compares poppy and proper with an identical initial wavefront: Code.zip

> (With respect to the note above about converting the units to meters: if the value has units of length, whether centimeters or parsecs, it will be converted to meters. If the user didn't specify units, meters are assumed (this is primarily to maintain backwards compatibility with earlier versions of POPPY, which assumed meters everywhere). Do you see somewhere the code is not functioning this way?)

Have you tried to explicitly use units in your Fresnel notebook, example 2, e.g. replacing 1e-6 with 1*u.micron for the wavelength? It doesn't work. I think the wavelength is never properly scaled to m. In poppy_core.py, look at lines 127+:

        self.wavelength = wavelength
        """Wavelength in meters (or other unit if specified)"""
        if isinstance(self.wavelength, u.quantity.Quantity):
            self._wavelength_m = self.wavelength.to(u.m).value
        else:
            self._wavelength_m = self.wavelength

I think there should be a scaling of the wavelength property itself if it is an astropy.quantity, e.g. it should be

        self.wavelength = wavelength
        """Wavelength in meters (or other unit if specified)"""
        if isinstance(self.wavelength, u.quantity.Quantity):
            self._wavelength_m = self.wavelength.to(u.m).value
            self.wavelength = self.wavelength.to(u.m)
        else:
            self._wavelength_m = self.wavelength

With this change, your notebook runs fine even when units are used.

douglase commented 6 years ago

aliasing:

have you tried propagating the same array in both libraries? POPPY doesn't have smoothed apertures yet (see #129 and #180) so the circular function is slightly different.
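As a rough illustration of the hard-edge vs gray-pixel difference (my own sketch using subpixel supersampling, not the code in the poppy geometry branch):

```python
import numpy as np

def hard_circle(n, radius_pix):
    # binary aperture: each pixel is entirely in or out of the circle
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    return (x**2 + y**2 <= radius_pix**2).astype(float)

def gray_circle(n, radius_pix, sub=8):
    # approximate per-pixel area coverage by sub x sub supersampling,
    # so edge pixels take fractional ("gray") values in [0, 1]
    m = n * sub
    y, x = (np.mgrid[:m, :m] + 0.5) / sub - n / 2.0
    fine = (x**2 + y**2 <= radius_pix**2).astype(float)
    return fine.reshape(n, sub, n, sub).mean(axis=(1, 3))
```

The hard-edged version is what produces slightly different arrays between the two libraries; the gray-pixel one softens the edge ringing.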

units

The comparison in the notebook didn't do any unit conversion, so it isn't printing the right values. If I don't change poppy_core but add units and replace all the .value calls with their decomposed-unit equivalents, "Example 2: Divergence of a Gaussian Beam near the Beam Waist" works:

# setup an initial Gaussian beam in an aperture. 

ap = poppy.CircularAperture(radius=1)
#CHANGE: make microns:
wf0 = poppy.FresnelWavefront(beam_radius=1*u.m, wavelength=1*u.um)

z_rayleigh = wf0.z_r.to(u.m)    # the Rayleigh distance

z = z_rayleigh * np.logspace(-1,1,num=11)

calc_rz = []
calc_wz = []
for zi in z:
    #setup entrance wave
    #CHANGE, microns:
    wf = poppy.FresnelWavefront(1*u.m, wavelength=1*u.micron)

    wf.propagate_fresnel(zi)

    # calculate the beam radius and curvature vs z
    #CHANGE: decompose into base units before taking value:
    calc_rz.append( (wf.r_c()/z_rayleigh).decompose().value) 
    calc_wz.append( (wf.spot_radius()/wf.w_0).decompose().value)

# Compare to the analytic solution for Gaussian beam propagation
zdzr = z/z_rayleigh
rz = (z**2 + z_rayleigh**2)/z
#CHANGE: convert to meters before taking value:
wz = wf0.w_0*np.sqrt(1+zdzr**2)

plt.plot(zdzr, rz/z_rayleigh, label="$R_c(z)/z_R$ (analytical)", color='blue')
plt.plot(zdzr, calc_rz, ls='dashed', linewidth=3, color='purple', label="$R_c(z)/z_R$ (calc.)")
#CHANGE:make w_0 meters:
plt.plot(zdzr, wz/wf.w_0.to(u.m), label="$w(z)/w_0$ (analytical)", color='orange')
plt.plot(zdzr, calc_wz, ls='dashed', linewidth=3, color='red', label="$w(z)/w_0$ (calc.)")
plt.xlabel("$z/z_R$")
plt.legend(loc='lower right', frameon=False);

Your suggestion is still a good one, since it simplifies the mental arithmetic if someone would otherwise see a length in inches^2/millimeter. But maybe we should make the default unit configurable? @mperrin thoughts?

mperrin commented 6 years ago

Aliasing: Agreed re: doing tests with the same sampling. Note that the minimum recommended value of sampling is 2. (i.e. 2 points per lambda/D, so it's Nyquist sampled). Anything less than that is going to have potentially surprising results. Right now poppy issues a warning for sampling lower than that, in FresnelWavefront.__init__, but if you're not looking at the log messages you won't see that. We should probably promote that to a warnings.warn call instead of just log.warning, which I believe would make it show up in red text in a Jupyter calculation display.
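A sketch of that promotion (the helper name here is illustrative, not current poppy code): the same message goes to the log and, via warnings.warn, to the standard Python warning machinery that Jupyter renders prominently.

```python
import logging
import warnings

log = logging.getLogger("poppy")

def warn_if_undersampled(oversample):
    # hypothetical helper; poppy's actual check lives in
    # FresnelWavefront.__init__ and currently only logs
    if oversample < 2:
        msg = "Oversampling > 2x suggested for reliable results."
        log.warning(msg)    # existing behavior: log message only
        warnings.warn(msg)  # proposed: also a visible Python warning
```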

Propers beam ratio is something different than poppy's oversample. The first determines the length of the grid size. I think this is what poppy's beam_radius does. In addition, the oversample fills zeros beyond that beam_radius variable. Did I understand the code correctly when I state this?

It turns out these are two slightly different ways of specifying the same thing: poppy gives the beam size in pixels (npix) plus a padding factor (oversample), while PROPER gives the total grid size plus the fraction of it filled by the beam (beam_diam_fraction).

In other words, these should be equivalent:

# poppy
wf = poppy.FresnelWavefront(beam_radius=0.5*u.m, npix=1024, oversample=2,
                            wavelength=1*u.micron)
# PROPER (IDL calling syntax)
prop_begin, wf, 1.0, 1e-6, 2048, 0.5

Both of those should produce wavefront arrays which are 2048 total pixels across, but have the beam in only the middle 1024 pixels. The outer pixels are all zeros. The inner 1024 pixels will represent 1 meter in both cases, specified by diameter in PROPER and by radius in poppy.

Note that poppy's default oversample=2 is equivalent to proper's default beam_diam_fraction=0.5.
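The equivalence can be stated as a little arithmetic (values from the example above; these helper functions are illustrative, not part of either package):

```python
def poppy_grid(npix, oversample):
    # returns (total array width in pixels, pixels across the beam)
    return npix * oversample, npix

def proper_grid(grid_n, beam_diam_fraction):
    # returns (total array width in pixels, pixels across the beam)
    return grid_n, round(grid_n * beam_diam_fraction)

print(poppy_grid(1024, 2))    # (2048, 1024)
print(proper_grid(2048, 0.5)) # (2048, 1024)
```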

(Aside on antialiased apertures: For circular apertures, the sub pixel geometry calculations for gray pixels are actually working well in the geometry branch. There were some other issues with that branch to handle more complex shapes with gray pixel approximations; that's been on the back burner for a while but I expect it could pretty easily be polished up enough to merge in. But this is a small effect compared to getting the sampling right.)

Units: In general, the units framework is supposed to let you do calculations and get correct results without having to worry about unit conversions explicitly. It all happens behind the scenes, with any necessary conversions applied. That said, for display and output purposes you're right that it makes sense to either cast to meters or decompose to base units (which do precisely the same thing, for lengths), just so you don't see displayed values in nonstandard composite units like inch^2/mm or mm^2/meter and so on.

I don't really think it's worth the effort to put in a configurable default unit. I don't see a driving need for being able to toggle arbitrarily - if everything's shown in meters, it's pretty easy to scale in your head when other units are needed. :-)