mperrin / poppy

Physical Optics Propagation in Python

Field amplitude does not scale #225

Closed: DaPhil closed this issue 7 years ago

DaPhil commented 7 years ago

Using Fresnel propagation, the electric field amplitude is not scaled correctly (apparently not scaled at all). I come to this conclusion after propagating a focused Gaussian and checking whether the overall power is conserved, which it is not. Consider this script, which creates a Gaussian electric field with a given power, applies a quadratic lens to focus it at 0.5 m, and then propagates three times in steps of 0.1 m. After every propagation step, I print the current power:

import numpy as np
import poppy
import astropy.units as u

num_xy = 128
oversample = 2

w0 = 1.0         # beam waist [m]
lam = 1000.0e-9  # wavelength [m]
P0 = 1.0e3       # beam power [W]

wf = poppy.FresnelWavefront(2.0*w0*u.m, wavelength = lam*u.m, npix = num_xy, oversample = oversample)
wf *= poppy.GaussianAperture(w = w0*u.m)
wf *= poppy.QuadraticLens(0.5*u.m)

# Adjust the electric field amplitude so that the power is P0
y, x = wf.coordinates()          # 2D coordinate grids [m]
x = y = x[0]                     # square grid: the first row holds the axis values
dx = dy = x[1] - x[0]            # grid spacing [m]
P = dx*dy*np.sum(wf.intensity)   # current power
wf *= np.sqrt(P0/P)
P = dx*dy*np.sum(wf.intensity)
print(P)                         # correct result: 1 kW

# Propagate and print current power (which should not change!!!)
wf.propagate_fresnel(0.1*u.m)
Int = wf.intensity
y, x = wf.coordinates()
x = y = x[0]
dx = dy = x[1]-x[0]
P = dx*dy*np.sum(Int)
print(P)

wf.propagate_fresnel(0.1*u.m)
Int = wf.intensity
y, x = wf.coordinates()
x = y = x[0]
dx = dy = x[1]-x[0]
P = dx*dy*np.sum(Int)
print(P)

wf.propagate_fresnel(0.1*u.m)
Int = wf.intensity
y, x = wf.coordinates()
x = y = x[0]
dx = dy = x[1]-x[0]
P = dx*dy*np.sum(Int)
print(P)

The result reads:

1000.0
640.0
360.0
160.0

However, the power should stay constant since no absorption is applied anywhere.

mperrin commented 7 years ago

Hi, I'm not sure why you think you need to multiply by the pixel area. The code normalization is set up to preserve the total summed intensity of the wavefront. Here's how I would do the above, simply looking at the summed intensity at each step:


import numpy as np
import poppy
import astropy.units as u

num_xy = 128
oversample = 2

w0 = 1.0         # beam waist [m]
lam = 1000.0e-9  # wavelength [m]
P0 = 1.0e3       # beam power [W]

wf = poppy.FresnelWavefront(2.0*w0*u.m, wavelength = lam*u.m, npix = num_xy, oversample = oversample)
wf *= poppy.GaussianAperture(w = w0*u.m)
wf *= poppy.QuadraticLens(0.5*u.m)

# Adjust electric field amplitude so that power is P0
P = wf.intensity.sum()
wf *= np.sqrt(P0/P)
P = wf.intensity.sum()
print(P) # Correct result, 1kW

# Propagate and print the summed intensity (which should not change!)
for _ in range(3):
    wf.propagate_fresnel(0.1*u.m)
    print(wf.intensity.sum())

The result is as expected:

1000.0
1000.0
1000.0
1000.0
DaPhil commented 7 years ago

Hi. "The code normalization is set up to preserve the total summed intensity of the wavefront." Why? I don't see how this makes sense. What physical meaning does the total summed intensity have? Why should it be preserved? For example, it changes with the numerical resolution (double the number of grid points num_xy in my example and you get 4x the value).
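
To make that resolution dependence concrete, here is a quick standalone check (the doubled grid is purely for illustration):

import numpy as np
import poppy
import astropy.units as u

# Sample the same initial plane at two resolutions.
for npix in (128, 256):
    wf = poppy.FresnelWavefront(2.0*u.m, wavelength=1.0e-6*u.m,
                                npix=npix, oversample=2)
    wf *= poppy.GaussianAperture(w=1.0*u.m)
    y, x = wf.coordinates()
    dx = x[0, 1] - x[0, 0]           # grid spacing [m]
    total = wf.intensity.sum()       # roughly 4x larger when npix doubles
    power = dx*dx*total              # independent of the sampling
    print(npix, total, power)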

On the other hand, the power of the beam must be preserved when there are no losses due to scattering or absorption; this is a consequence of energy conservation. As for "I'm not sure why you think you need to multiply by the pixel area": because I calculate the power, which means integrating the intensity over both spatial axes, P = \int dx \int dy \, I(x,y). Numerically this becomes P = \Delta x \, \Delta y \sum_{k,l} I(x_k, y_l), so I need to multiply the total intensity by the pixel area to obtain the power. This quantity is independent of the numerical resolution and must be preserved in the code I presented.
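
In code, that integral is just the Riemann sum my script uses; wrapped as a helper it could look like this (beam_power is only an illustrative name, not a poppy API):

import numpy as np

def beam_power(wf):
    # Riemann-sum approximation of P = \int dx \int dy I(x,y)
    y, x = wf.coordinates()        # 2D coordinate grids [m]
    dx = x[0, 1] - x[0, 0]         # grid spacing at the current plane
    return dx*dx*np.sum(wf.intensity)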

One can also check this against the analytical description of a Gaussian: integrate the square of the electric field (i.e. calculate the power) and you end up with a constant, \pi w_0^2/2, assuming an initial field amplitude of 1. So the power is independent of the propagation distance z!
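
This is easy to verify numerically (a quick scipy check, not part of my original script):

import numpy as np
from scipy.integrate import quad

w0 = 1.0   # beam waist [m]
# Power of I(r) = exp(-2 r^2 / w0^2), integrated in polar coordinates:
P, _ = quad(lambda r: np.exp(-2.0*r**2/w0**2)*2.0*np.pi*r, 0.0, np.inf)
print(P, np.pi*w0**2/2.0)   # both give ~1.5708, independent of z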

Maybe a real-life example as well: in the current implementation the intensity amplitude stays constant for all z, but it should increase toward the focus. That is the whole reason to focus a laser beam in many applications, e.g. material processing: you increase the field strength via focusing.
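
For a Gaussian beam this is easy to quantify with the standard w(z) formula (an illustrative standalone sketch; the waist, wavelength, and power values are arbitrary):

import numpy as np

w0 = 10.0e-6            # waist at the focus [m]
lam = 1.0e-6            # wavelength [m]
P = 1.0e3               # beam power [W]
zR = np.pi*w0**2/lam    # Rayleigh range [m]

for z in (10.0*zR, zR, 0.0):            # approaching the focus at z = 0
    w = w0*np.sqrt(1.0 + (z/zR)**2)     # 1/e^2 beam radius at z
    I_peak = 2.0*P/(np.pi*w**2)         # on-axis intensity [W/m^2]
    print(f"z = {z:.3e} m -> I_peak = {I_peak:.3e} W/m^2")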

After these arguments, do you still think the total intensity and not the power should be preserved while propagating?

mperrin commented 7 years ago

The normalization is motivated by digital imaging, specifically the measurement of PSFs on pixellated detectors (CCDs, CMOS, HgCdTe hybrid arrays, etc.). The count rate in a given pixel is an integral of the incident power over the area of the pixel. That's what we're trying to simulate, because that's the actual observable that one measures with a detector, in the lab or on a telescope.

"It changes for example with the numerical resolution." Yes, intentionally. Think of it this way: if you observe the same point spread function with two CCDs, and one of them has a 2x smaller pixel scale, then the detector with the smaller pixels will record lower values in each pixel, by about 4x on average. In essence, the values in the arrays are already integrated over the spatial extent of each pixel.
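
A standalone rebinning sketch shows the same thing (plain numpy, with an arbitrary Gaussian spot standing in for the PSF):

import numpy as np

# A smooth "PSF" sampled on a fine grid.
n = 256
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
fine = np.exp(-(x**2 + y**2)/(2.0*20.0**2))

# A detector with 2x larger pixels sums each 2x2 block of fine pixels.
coarse = fine.reshape(n//2, 2, n//2, 2).sum(axis=(1, 3))

print(fine.sum(), coarse.sum())    # totals agree: energy is conserved
print(coarse.max()/fine.max())     # each large pixel holds ~4x the counts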

"What physical meaning does the total summed intensity have?" Consider conservation of energy. If you observe a PSF with a detector and move the detector to different positions along the beam propagation, then by conservation of energy the total summed counts measured on the detector should be the same regardless of position (assuming you can neglect noise terms such as photon shot noise, detector readout noise, etc.).

"After these arguments, do you still think the total intensity and not the power should be preserved while propagating?" I see where you're coming from, and I think these are just different normalization choices about what the values are meant to represent: different options, equally valid, with arguments on either side. In any case, as a purely practical matter, our team here at STScI is not going to prioritize reworking these conventions now. If you'd like, you're welcome to fork the code, develop an option to switch the normalization to your preferred convention, and submit a pull request, which we could consider. But the current code works for the purposes we developed it for.
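
For completeness, converting between the two conventions in user code is a one-liner, since under this normalization each array value behaves like power per pixel. A hedged sketch (the setup mirrors the scripts above; variable names are mine, not a poppy API):

import numpy as np
import poppy
import astropy.units as u

P0 = 1.0e3   # desired beam power [W]

wf = poppy.FresnelWavefront(2.0*u.m, wavelength=1.0e-6*u.m, npix=128, oversample=2)
wf *= poppy.GaussianAperture(w=1.0*u.m)
wf *= poppy.QuadraticLens(0.5*u.m)
wf *= np.sqrt(P0/wf.intensity.sum())   # poppy convention: summed intensity = P0

wf.propagate_fresnel(0.1*u.m)

y, x = wf.coordinates()
dx = x[0, 1] - x[0, 0]                 # grid spacing at the current plane
I_phys = wf.intensity/dx**2            # physical intensity [W/m^2]
print(wf.intensity.sum())              # P0: poppy's conserved quantity
print(dx*dx*np.sum(I_phys))            # P0 again: the beam power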