Open brandondube opened 2 years ago
I think it makes sense for the following to be the set of interfaces, as the other machinery is allowed to interact with them:
```python
# note that this is basically a prysm Wavefront
class Pupil:
    N: int
    dx: float
    data: ndarray

class FPM:
    N: int
    dx: float
    data: func(wvl: float) -> ndarray

class LyotStop:
    N: int
    dx: float
    data: ndarray

class ThreePlaneSingleDMCoronagraph:
    def __init__(self, pupil, dm, fpm, lyot_stop, img_specification_tbd):
        pass
```
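As a concreteness check, here is a minimal runnable version of those containers as plain dataclasses; this is a sketch, with `Callable` standing in for the `func(wvl) -> ndarray` annotation, not settled API:

```python
from dataclasses import dataclass
from typing import Callable

import numpy as np

# hypothetical concrete forms of the interfaces above; field names match the sketch
@dataclass
class Pupil:
    N: int
    dx: float
    data: np.ndarray

@dataclass
class FPM:
    N: int
    dx: float
    data: Callable[[float], np.ndarray]  # wvl -> mask realized at that wavelength

@dataclass
class LyotStop:
    N: int
    dx: float
    data: np.ndarray
```

Usage would be e.g. `pu = Pupil(N=512, dx=0.1, data=np.ones((512, 512)))`, with the FPM holding a callable instead of an array because it is the chromatic element.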
Then there can be constructors, such as
```python
class Pupil:
    @classmethod
    def circle(cls, N, Dpup, Npup): pass

    @classmethod
    def HST(cls, N, Dpup, Npup): pass

    @classmethod
    def JWST(cls, N, Dpup, Npup): pass
```
the notable telescope apertures having D and Npup so that larger or smaller designs can be explored with some freedom. Users could clobber `Pupil.dx` to change this, but it seems ugly.
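One way to avoid the `dx` clobbering entirely is to derive it inside the constructor; this is a sketch of a hypothetical constructor body, and the sampling convention `dx = Dpup / Npup` is my assumption, not settled API:

```python
import numpy as np

def circle_pupil_sketch(N, Dpup, Npup):
    # hypothetical constructor body: the sample spacing follows from the
    # physical diameter and the number of samples across the pupil
    dx = Dpup / Npup  # physical units per sample (assumed convention)
    x = (np.arange(N) - N // 2) * dx
    xx, yy = np.meshgrid(x, x)
    r = np.hypot(xx, yy)
    # binary circular aperture of physical diameter Dpup on an NxN grid
    data = (r <= Dpup / 2).astype(float)
    return N, dx, data
```

With this, larger or smaller designs come from varying `Dpup` and `Npup`, and `dx` is always self-consistent.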
Similarly, you have
```python
class FPM:
    @classmethod
    def lyot(cls, N, lamD, r): pass

    @classmethod
    def vortex(cls, N, lamD, charge): pass
```
vortex doesn't need to know N or lamD, but this keeps the interface nearly the same, which is good for muscle memory.
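The `data: func(wvl) -> ndarray` contract from the interface sketch could be met with a closure; in this sketch, `make_fpm_data`, `radius_at`, and the oversampling convention are all hypothetical stand-ins for whatever chromatic scaling the real mask needs:

```python
import numpy as np

def make_fpm_data(N, radius_at, oversampling=8):
    # hypothetical closure: the grid is computed once, but the mask is
    # realized per wavelength, so `data` is a function wvl -> ndarray
    x = (np.arange(N) - N // 2) / oversampling  # samples, in lam/D at the design wavelength
    xx, yy = np.meshgrid(x, x)
    rr = np.hypot(xx, yy)

    def data(wvl):
        # 0 inside the occulting spot, 1 (transmissive) outside
        return (rr > radius_at(wvl)).astype(float)

    return data
```

For a hard-edged Lyot spot the caller might pass `radius_at=lambda wvl: 2.7`; a genuinely chromatic mask would make `radius_at` depend on `wvl`.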
And in Lyot you have
```python
class LyotStop:
    @classmethod
    def circle(cls, N, Npup, od): pass

    @classmethod
    def annulus(cls, N, Npup, od, id): pass
```
I see the following pros:

- We get namespacing to box things in various class factories
- Everything is parametric at initialization (good for the human) but data at runtime (good for the computer) (more?)

And cons:

- Some duplication between the Lyot stop and the Pupil
- A lot of this will end up a thin veneer over prysm, and sharing grids between constructors will become Not A Thing, unless dygdug cargo-cults the old gridcache from prysm into itself (please no)
All of this stuff gets initialized only once, so repeated grid calculations seem not so problematic. The current usage is something like this:
```python
data_root = Path('~/Downloads').expanduser()
lc_args = dict(
    Nmodel=512,
    Npup=300,
    Nlyot=300 * 0.8,
    Nfpm=128,
    Nimg=256,
    Dpup=30,
    fpm_oversampling=8,
    image_oversampling=8,
    rFPM=2.7,
    wvl0=.550,
    fno=40,
    data_root=data_root,
    ifn_fn=DEFAULT_IFN_FN,
    iwa=3.5,
    owa=10,
    start_az=-80,
    end_az=80,
)
l = LyotCoronagraphSingleDM(**lc_args)
```
And would change to:
```python
wvl = .550
D = 30
pu = Pupil.circle(N=512, D=D, Npup=300)
fpm = FPM.lyot(N=128, lamD=wvl/D, r=2.7)
ls = LyotStop.circle(N=512, Npup=300, od=0.8)
dm = DM(...)
img = TBDIMGSPEC(...)
l = ThreePlaneSingleDMCoronagraph(pu, dm, fpm, ls, img)
```
The translation exercise makes me realize that I missed the oversampling parameter for the FPM and the F/#. But on the whole, is this interface any better? It reduces the model of the coronagraph from a giant blob of everything to the assembly of legos, which is good.
Thoughts? @Jashcraf
I like this a lot - seems like an intuitive way to cut through the parameter wall. Though is there really much of a reason to differentiate a Pupil and Lyot stop?
Also I didn't get the con about sharing grids between constructors, could you elaborate?
Sharing grids means just that; for example, you can do

```python
from prysm import coordinates, geometry

x, y = coordinates.make_xy_grid(N, dx=1)
r, t = coordinates.cart_to_polar(x, y)
pu_data = geometry.circle(Npup, r)
ls_data = geometry.circle(Npup*od, r)
```
so x, y, r, t are only computed once. Computing things only once matters somewhat for speed (a model with a giant thunk to "boot up" can be Not Fun to work with). But if the types keep x/y/r/t around for their own purposes, then you have 5x(NxN) storage for each element (80% of it overhead), and duplicating those between the two hurts quite a bit. E.g., at 16K in double precision you have 2 gigabytes per array, so storing 10 of them is ~20 GB, which doesn't fit on a laptop with 16 GB of memory. But if you share the XYRT between the two of them, then it's six arrays, which just barely fits on a 16 GB laptop.
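The memory arithmetic can be checked directly:

```python
import numpy as np

N = 2**14  # a "16K" grid
# bytes for one NxN float64 array, expressed in GiB
gib = N * N * np.dtype(np.float64).itemsize / 2**30
print(gib)       # 2.0 GiB per NxN double-precision array
print(10 * gib)  # 20.0 GiB: five arrays apiece (x, y, r, t, data) for two elements
print(6 * gib)   # 12.0 GiB when x/y/r/t are shared between the two
```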
That is hypothetical "if" they store them though. LOWFSSim stores them and I have literally never touched them ever.
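One cache-free way to get the sharing without the storage overhead is to compute x/y/r/t once, hand them to the constructors, and let them go out of scope afterwards; this is a sketch using hypothetical helper names, not dygdug or prysm API:

```python
import numpy as np

def make_xyrt(N, dx):
    # compute the shared grids exactly once, up front
    x = (np.arange(N) - N // 2) * dx
    xx, yy = np.meshgrid(x, x)
    r = np.hypot(xx, yy)
    t = np.arctan2(yy, xx)
    return xx, yy, r, t

def pupil_from_r(r, radius):
    # consumers keep only their own data array, not the grids
    return (r <= radius).astype(float)

x, y, r, t = make_xyrt(512, 1.0)
pu_data = pupil_from_r(r, 150)        # Npup/2 = 150 samples
ls_data = pupil_from_r(r, 150 * 0.8)  # od = 0.8 of the pupil
```

Once `x, y, r, t` are garbage collected, only the two data arrays remain, so no type ever stores grids it never touches.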
I'm remembering, too, that we need a way to specify disturbances. At the risk of inducing vomit with complexity, I kind of like overloading `__add__` to add a phase screen, or N phase screens.
```python
pu = Pupil.circle(...)
zernbasis = np.random.rand(11, N, N)
zerncoefs = np.random.rand(11)
cmplx_pu = pu + polynomials.sum_of_2d_modes(zerncoefs, zernbasis)
```

the implementation of `__add__` would do `pu.data * exp(1j*...)`
Well, this concept is actually a little half-baked; the wavelength is needed to convert nanometers of OPD to waves/radians, and that realization happens on a per-wavelength basis, so you can't really modify the pupil for this, since the sketch above only has the FPM as a chromatic element.
Maybe it looks like adding an optional `OPDError` interface to the model, `ThreePlaneSingleDMCoronagraph(..., opde=ZernikeExpander)`. I guess then you can make the contract that the Zernike expander must produce nanometers without knowing the wavelength, and the coronagraph model takes care of the scaling.
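That contract could look like the following sketch, where `ZernikeExpander` and `chromatic_pupil` are hypothetical names: the expander is wavelength-agnostic and returns nanometers, and the model owns the nanometers-to-radians conversion per wavelength:

```python
import numpy as np

class ZernikeExpander:
    # hypothetical OPDError implementation: knows nothing about wavelength
    def __init__(self, basis, coefs):
        self.basis = basis  # (k, N, N) modes
        self.coefs = coefs  # (k,) weights, in nm

    def opd_nm(self):
        # weighted sum of modes; still just nanometers of OPD
        return np.tensordot(self.coefs, self.basis, axes=1)

def chromatic_pupil(pupil_data, opde, wvl_um):
    # the coronagraph model does the scaling: nm of OPD -> radians at wvl
    opd_nm = opde.opd_nm()
    wvl_nm = wvl_um * 1e3
    return pupil_data * np.exp(1j * 2 * np.pi / wvl_nm * opd_nm)
```

The per-wavelength loop in the model would then call `chromatic_pupil` once per wavelength, with the pupil and expander untouched between calls.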
Standard "idioms" for models