Closed drvdputt closed 2 years ago
I have added the model trimming tool mentioned above to `helpers.py`, in another branch. There are some more changes to `parse_table` too, since it crashed if there were 0 Gaussians or Drudes in the table. Should I push those changes too?
Yes please.
By trying out my new tool, I found some ways in which PAHFIT can crash, for certain science pack configurations.
The main failure mode is when there are zero entries of a certain feature type in the pack file, e.g. no Drude1D, no Gaussian1D. This also happens when there are e.g. no dust features in the given wavelength range, and the trimming tool consequently removes all Drude1D.
Is this a use case we want to support? The crash happens at model construction, and fixing it should be straightforward at least for that step. But of course, there might be some other hiccups in other parts of the program (e.g. feature strengths, plotting...).
Writing a set of tests for this would be straightforward I think. Just create some science packs where all the features of one type are removed.
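Such a test could be sketched as below. This is only an illustration: it assumes pack rows carry a "Form" entry naming the component class (as in the IPAC science packs), uses plain dicts instead of an astropy table, and `drop_form` is a hypothetical helper, not part of PAHFIT.

```python
def drop_form(rows, form):
    """Return a copy of the pack rows with all features of one form removed."""
    return [row for row in rows if row["Form"] != form]


# A toy science pack with one feature of each type (names made up).
pack = [
    {"Name": "BB_300K", "Form": "BlackBody1D"},
    {"Name": "DF_6.2", "Form": "Drude1D"},
    {"Name": "H2_S(1)", "Form": "Gaussian1D"},
]

for form in ("BlackBody1D", "Drude1D", "Gaussian1D"):
    trimmed = drop_form(pack, form)
    # The real test would then run model construction
    # (parse_table + PAHFITBase.__init__) on each trimmed pack
    # and check that it no longer crashes.
    assert all(row["Form"] != form for row in trimmed)
```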
@drvdputt my MBB PR #176 should fix the crashing behavior you observed. In terms of this PR, I like the concept of on-the-fly adjustments (which will be necessary for our active tuning on JWST spectra). But given our joint interest in moving to a more flexible & composable "pack" setup (instrument (PAHFit.packs.instrument class) + science (.yaml files)), perhaps we should discuss these changes in that context? We can chat about it Monday.
I have written a test (see `pahfit/tests/test_model_trimming`) which runs the construction of PAHFITBase (`parse_table()` and `__init__()`) with different model components removed from the table. This covers: no blackbodies, no Gaussians, no Drudes, and no Drude attenuation.
Then, I fixed the crashes that this caused. #176 already added safeguards against `param_info[i] is None` in `__init__()`. But they did not work yet, because `param_info` was actually never set to None. Therefore, `parse_table` will now set the components of the `param_info` tuple to None when applicable. Following this addition, I added similar safeguards to `estimate_init`.

This has stopped the crashing, at least for the construction of PAHFITBase. We can address any other issues down the line when people (me) actually start using the models with Blackbody/Gaussian/Drude removed.
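The gist of the fix can be sketched as follows (simplified dict rows rather than an astropy table; `pack_component` is an illustrative helper, not the actual `parse_table` code): a component with zero matching rows becomes None instead of an empty structure, so the downstream `is None` guards can actually fire.

```python
def pack_component(rows, form):
    """Collect the pack rows of one functional form.

    Returns None (not an empty list) when the pack contains no features
    of that form, so that downstream `if param_info[i] is not None`
    guards skip the missing component instead of building it.
    """
    selected = [r for r in rows if r["Form"] == form]
    return selected if selected else None


# A pack with only a single dust feature, no blackbodies or Gaussians.
pack = [{"Name": "DF_6.2", "Form": "Drude1D"}]

param_info = tuple(
    pack_component(pack, form)
    for form in ("BlackBody1D", "Drude1D", "Gaussian1D")
)
# Only the Drude entry is populated; the other entries are None.
```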
So this is close to mergeable.
Probably fine to merge now, but just be aware some details of the features table will be changing slightly, so the tests which target it will need a few updates. Hopefully nothing too drastic.
[Edit] BTW, here's the nomenclature I'm suggesting we adopt. So the new table-thingy will be a feature table, and each feature (like say a [NeII] line) will have various parameters (like wavelength, fwhm, power), and each parameter will have attributes (like value, bounds).
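To illustrate the suggested hierarchy with made-up numbers (this is not the actual Features table implementation, just a picture of the nomenclature): a feature owns parameters, and each parameter owns attributes.

```python
# feature -> parameter -> attribute, illustrated as nested dicts
features = {
    "[NeII]": {  # a feature
        # each parameter has attributes like value and bounds
        "wavelength": {"value": 12.813, "bounds": (12.78, 12.85)},
        "fwhm": {"value": 0.1, "bounds": (0.05, 0.2)},
        "power": {"value": 1.0, "bounds": (0.0, None)},
    },
}
```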
Ok, I would like to get this merged today. I will rebase or merge to integrate the latest changes. We can then adjust this code to the new YAML science packs whenever that system is put to use.
Going to merge this now, on the notion that @drvdputt will help @Ameek-Sidhu with connecting in the YAML-based Features table, and moving fit/read/etc. from helpers into the model object.
The part that creates `param_info` (originally part of `read()`) has been split off into a new function called `parse_table()`. Functionality of `read()` has not changed, so nothing should break by doing this.

Main use case for `parse_table()` is adjusting science packs on-the-fly in Python:

- create `param_info` using `parse_table()`
- pass `param_info` to PAHFITBase

I already have a tool which makes use of this, to create "trimmed" PAHFITBase models based on the wavelength range of the input data. It could be added to PAHFIT later, so that science packs do not have to be adjusted when only one spectral segment needs to be fit. Will also be relevant for #158.
Other small changes: `read()` and `plot()` are now static methods.