@abhi0395 this is an update to your best_fit_model branch (PR #283) to address some of my maintainability concerns. I've made enough changes that I'm doing this as a PR to your branch rather than updating your branch directly. Although this version is quite a bit different from your original branch, I learned a lot from your work about the various corner cases with MPI vs. multiprocessing, wavehashes out of order compared to camera bands, etc. so thanks for that initial solution which guided me for what to watch out for.
In the end the output files are the same except:
- I renamed `"COEFFTYPE"` to `"FITMETHOD"` in anticipation of issue #274 splitting archetype coefficients into a separate column from template coefficients, and I also promoted that into being written out as part of the regular redrock REDSHIFTS table, not just in the model file.
- For archetypes with nearest neighbors, I joined the subtypes with a semicolon instead of an underscore, so that e.g. SUBTYPE='ELG_23;ELG_40' instead of SUBTYPE='ELG_23_ELG_40'. That makes it easier to split the SUBTYPE string into the individual archetype subtypes.
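A small illustration of why the semicolon delimiter helps (subtype values here are illustrative, not taken from a real run):

```python
# Splitting a semicolon-joined SUBTYPE back into individual archetype
# subtypes is unambiguous, because the subtype names themselves contain
# underscores.
subtype = "ELG_23;ELG_40"
parts = subtype.split(";")
print(parts)  # ['ELG_23', 'ELG_40']

# With the old underscore delimiter, the split loses the boundaries
# between subtype names:
old_parts = "ELG_23_ELG_40".split("_")
print(old_parts)  # ['ELG', '23', 'ELG', '40']
```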
Big Caveat: the evaluated Archetype+legendre models are not correct, which I think is due to issue #291 with the inconsistent definitions of legendre basis wavelengths and whether they span the entire brz wavelength range or only the individual cameras. Let's get that sorted out separately, and then revisit this model evaluation to make sure it uses the same pieces. We don't need this for Jura, but we will need it for Archetype runs post-Jura.
More details
The goal here is to have `Template.eval()` and `Archetypes.eval()` "own" the concept of how to evaluate a model given coefficients, and have `rrdesi -> DistTargets.eval_models -> Target.eval_model` use those. This simplifies the bookkeeping to avoid things like:
- `wavedict`, `wave_dict`, and `dwave` all being dictionaries of wavelength grids differing only by their keys.
- `Archetype.get_best_archetype_model` having to know about templates just in case it is asked to evaluate something that isn't an archetype.
- New functions `get_best_model_spectra`, `eval_model_for_one_spectra`, and `eval_tdata` having significant conceptual overlap with previously existing `eval` functions without actually using them. Instead, I pulled pieces of those into the other `eval` functions.
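A minimal sketch of the ownership pattern described above. The class and method names follow the PR, but the signatures and internals here are assumptions for illustration, not the actual redrock API:

```python
import numpy as np

class Template:
    """Owns the evaluation of a template model from coefficients (sketch)."""
    def __init__(self, basis):
        self.basis = np.asarray(basis)   # shape (ntemplates, nwave)

    def eval(self, coeff):
        # Model = linear combination of basis vectors; redshifting and
        # resampling onto a camera wavelength grid are omitted here.
        return np.asarray(coeff) @ self.basis

class Target:
    """Knows which template and coefficients describe its best fit."""
    def __init__(self, template, coeff):
        self.template = template
        self.coeff = coeff

    def eval_model(self):
        # Delegate to the object that owns the evaluation logic,
        # instead of re-implementing the linear algebra here.
        return self.template.eval(self.coeff)
```

The point of the delegation is that archetype-specific and template-specific evaluation details stay inside `Archetypes.eval()` and `Template.eval()` respectively, so callers never need type-specific branching.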
I also updated some of the underlying messiness that made your original branch hard to implement:
- wavehash vs. camera bands: for DESI, all spectra of a given band have the same wavelengths, so `DistTargetsDESI` can just use the band as the wavehash.
- wavehash out of order compared to camera bands (this was due to a `set` operation changing the order).
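A sketch of the band-as-wavehash simplification. The toy wavelength grids and dictionary layout below are assumptions for illustration, not the actual `DistTargetsDESI` code:

```python
import numpy as np

# For DESI, every spectrum in a given camera band shares one wavelength
# grid, so the band name itself can serve as the dictionary key (the
# "wavehash") instead of an opaque hash of the grid.
wave = {
    "b": np.linspace(3600.0, 5800.0, 5),   # toy grids; real ones are finer
    "r": np.linspace(5760.0, 7620.0, 5),
    "z": np.linspace(7520.0, 9824.0, 5),
}

# Iterating over an explicit band tuple preserves b, r, z order; passing
# opaque hashes through a set() can silently reorder the bands.
for band in ("b", "r", "z"):
    grid = wave[band]
    print(band, grid[0], grid[-1])
```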
Please take a look at this, and then let's discuss any items that you disagree with and/or have a better idea for how to implement. Thanks.