Open emolter opened 1 year ago
Thanks for the info, Ned! I don't think the dependence on the emission angle data should be an issue -- these are essentially constants within the model, right?
The way the code got formatted makes it a little tricky to read. I could try reformatting it, but maybe it's better if you do it so I don't accidentally delete something I shouldn't. I think some well-placed backticks might get the whole thing to render better on the webpage. Do you have a definition for the `lddisk_model` function? I don't see it in the post.
Yes, the array is a constant within the model. It changes slowly with time, but for a given ALMA dataset the change is definitely not enough to matter. I sent you the function definition and usage via email because it's a bit long for the GitHub comment section.
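For anyone else following this thread: the actual definition went by email, but a power-law limb-darkening law is a common choice for this kind of model, so here is a minimal sketch of what such a disk model can look like. The function name `lddisk_sketch` and the normalization convention are my assumptions; the emailed `adjust_lddisk` may differ.

```python
import numpy as np

def lddisk_sketch(mu, a, flux):
    """Hypothetical power-law limb-darkened disk: brightness proportional
    to mu**a, scaled so the summed image equals `flux`.
    `mu` is a 2-D array of cos(emission angle), NaN off-disk."""
    img = np.nan_to_num(mu) ** a  # NaN (off-disk) -> 0, so those pixels stay dark
    img *= flux / img.sum()       # normalize the disk-integrated value to `flux`
    return img
```

With `a = 0` this reduces to a uniform disk, which is one way to sanity-check the normalization.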
Hi Ian, Ryan, and Jeff,
This is a follow-up to our meeting today. We discussed the idea of adding parametric models of solar system bodies to operate in series with the rest of the non-parametric neural net. The goal (in my opinion) is to use the model as prior knowledge about the structure of the source, as well as to retrieve the (physically meaningful) parameters of the parametric model from the data.
I'm copy-pasting a very simple piece of code that adjusts the free parameters of a limb-darkened disk model, along with the code I used to turn this into a PyTorch tensor at the same resolution as the data. You'll also need the file "uranus_mu_2021-09-30.npy", a 2-D array of emission angles on the Uranian disk at super-resolution; it is rather large (650 MB), so I have staged it for you on Google Drive for now. Is the model's dependence on this array, or one like it, an issue? If so, we can discuss how best to implement this; for example, I could back up a step and send you the code I use to produce the mu values on a given pixel grid.
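On that last point, a toy sketch of producing mu on a pixel grid for a sphere seen pole-on, just to illustrate the idea. The name `mu_on_grid` is hypothetical, and the real Uranus map additionally has to handle oblateness and the sub-observer geometry:

```python
import numpy as np

def mu_on_grid(npix, cell_size_arcsec, radius_arcsec):
    """Toy emission-angle map for a spherical body seen pole-on:
    mu = cos(emission angle) = sqrt(1 - (r/R)^2) inside the disk,
    NaN outside it."""
    x = (np.arange(npix) - npix // 2) * cell_size_arcsec
    xx, yy = np.meshgrid(x, x)
    r2 = (xx ** 2 + yy ** 2) / radius_arcsec ** 2  # (r/R)^2 per pixel
    mu = np.full((npix, npix), np.nan)
    inside = r2 <= 1.0
    mu[inside] = np.sqrt(1.0 - r2[inside])
    return mu
```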
I am very happy to provide any additional information you might need, e.g., reasonable parameter ranges/priors, more complex models, or a restructure of this model into a different format.
I look forward to continuing to collaborate on this!
```python
import numpy as np
import torch
from scipy import ndimage

# coords (with cell_size in arcsec), npix, and device are assumed to be
# defined elsewhere in the script

def adjust_lddisk(mu, a, flux, shift=None):
    ''' limb-darkened disk model to update each step (?) '''
    ...  # body omitted in the post

# import the model
modelfname = 'data/uranus_mu_2021-09-30.npy'  # has pixel scale 1 mas
theta0 = [0.21, 63.7, 0.0, 0.0]  # [ld, flux, xshift, yshift] from 1-D optimizer of limb-darkened model
mu = np.load(modelfname)

# zoom the model to desired resolution - note that zoom hates NaNs
zfactor = 0.001 / coords.cell_size
mu[np.isnan(mu)] = 0.0
mu = ndimage.zoom(mu, zfactor)
mu[mu < 1e-3] = np.nan  # account for zoom hating NaNs by setting very small mu back to NaN
ldmodel = adjust_lddisk(mu, theta0[0], theta0[1], shift=(theta0[2], theta0[3]))
pad_width = int((npix - ldmodel.shape[0]) / 2)
ldmodel = np.pad(ldmodel, pad_width)

# put ldmodel into starting model
ldmodel = np.fft.fftshift(ldmodel)  # need this because base_cube gets automatically fftshifted
ldmodel = np.expand_dims(ldmodel, 0)  # base_cube needs to be (nchans, nx, ny)
startmodel = torch.tensor(ldmodel, dtype=torch.float64, device=device)
```
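Since the zoom/NaN workaround tends to trip people up, here is that pattern in isolation (a standalone sketch, separate from the pipeline above):

```python
import numpy as np
from scipy import ndimage

a = np.array([[np.nan, 1.0],
              [1.0, 1.0]])
# ndimage.zoom's spline interpolation propagates NaNs through every pixel
# they touch, so zero them out before zooming...
a_filled = np.where(np.isnan(a), 0.0, a)
z = ndimage.zoom(a_filled, 2)
# ...then push near-zero pixels back to NaN afterwards
z[z < 1e-3] = np.nan
```

The 1e-3 threshold is only safe when real on-disk values are comfortably above it, which holds for mu maps where the disk interior is of order unity.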