darthoctopus closed this 2 months ago
Hmm... yeah, I'm not sure how to get around this in a neat way.
The plan is to entirely replace the KDE/Asy_peakbag setup with the dimensionality reduction, so I'm a bit apprehensive about making any statements about what the best method would be just yet.
I have been thinking of allowing observational inputs for any of the parameters, in the form of a dictionary whose keys correspond to parameters in the prior_data.csv file. This could help in the odd case where, for example, epsilon is off.
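To make that concrete, such a dictionary might look like the sketch below. This is purely illustrative: the key names and the (mean, uncertainty) value format are assumptions, and the real keys would be whatever parameter names appear in prior_data.csv.

```python
# Hypothetical sketch of an observational-input dictionary.
# Keys correspond to parameter names in prior_data.csv (names here
# are illustrative); values are (mean, sigma) pairs, though the
# actual value format is still up for discussion.
obs_inputs = {
    "eps_p": (1.2, 0.1),      # e.g. to nudge epsilon when the prior is off
    "numax": (3800.0, 50.0),  # muHz
    "dnu": (170.0, 2.0),      # muHz
}

# Any parameter appearing here would then be constrained by the
# user-supplied value rather than the sample-based prior alone.
for name, (mu, sigma) in obs_inputs.items():
    print(f"{name}: {mu} +/- {sigma}")
```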
However, we could also use this to redefine `developer_mode` so that the priors are defined exclusively in terms of these inputs. This would forgo the dimensionality reduction, which would probably slow things down, but it would give the user a greater degree of control over the parameters.
I've started working on this in the dev branch.
Things should now be set up so that you can provide a distribution class instance with a callable ppf method, which removes the corresponding parameter from the list of PCA parameters.
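For illustration, here is a minimal stdlib-only sketch of what "a distribution instance with a callable ppf" could look like (the class name and the numax example are hypothetical; in practice a frozen scipy.stats distribution would also qualify, since those expose a `ppf` method):

```python
from statistics import NormalDist

class NormalPrior:
    """Minimal distribution object exposing a callable ppf (inverse CDF)."""

    def __init__(self, mu, sigma):
        self._dist = NormalDist(mu, sigma)

    def ppf(self, q):
        # Map a quantile in (0, 1) to a parameter value; this is the
        # interface a sampler can use to draw from the supplied prior.
        return self._dist.inv_cdf(q)

# Hypothetical usage: a user-supplied prior on numax, which would
# then be dropped from the set of PCA parameters.
numax_prior = NormalPrior(3800.0, 50.0)
print(numax_prior.ppf(0.5))  # median, i.e. 3800.0
```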
Everything is being put together in the modeID module, and it seems to be running fairly well, but it hasn't been integrated into the star class yet.
I'll write up a notebook with some working examples.
Done with PR #276
I'm trying to peakbag a K dwarf with TESS 20-second data, which is cooler than all of the stars in the existing prior training set. While `developer_mode` in principle permits me to force particular values of Δν and νmax, the influence of (e.g.) T_eff from the KDE prior is so large that the sampler in the asymptotic peakbagging procedure only considers very low (rather than the provided very high) values of νmax and Δν.