**mgerbino** opened this issue 2 years ago (status: Open)
Is the full model and parameter set for MFLike written down somewhere? I could extract it from the code (and the notebook) I guess.
The fiducial model for the v0.6 release is ΛCDM, with parameters set according to this CAMB ini file: https://github.com/ACTCollaboration/actsims/blob/master/data/cosmo2017_10K_acc3_params.ini
The fiducial foreground model is set according to the following dict:

```python
fg_components = ["cibc", "cibp", "kSZ", "radio", "tSZ"]
fg_norm = {"nu_0": 150.0, "ell_0": 3000, "T_CMB": 2.725}
fg_params = {"a_tSZ": 3.30, "a_kSZ": 1.60, "a_p": 6.90, "beta_p": 2.08,
             "a_c": 4.90, "beta_c": 2.20, "n_CIBC": 1.20,
             "a_s": 3.10, "T_d": 9.60}
```

i.e., no diffuse foregrounds, and TT-only foregrounds with no tSZxCIB component. This amounts to setting:

```python
"a_gtt": 0, "a_gte": 0, "a_gee": 0, "a_psee": 0, "a_pste": 0, "xi": 0
```
No bandpass integration (Dirac-delta bandpasses) and no systematics parameters. This amounts to setting:

```yaml
band_integration:
  external_bandpass: False
  nsteps: 1
  bandwidth: 0
```

```python
"bandint_shift_93": 0, "bandint_shift_145": 0, "bandint_shift_225": 0,
"calT_93": 1, "calE_93": 1, "calT_145": 1, "calE_145": 1,
"calT_225": 1, "calE_225": 1, "calG_all": 1,
"alpha_93": 0, "alpha_145": 0, "alpha_225": 0
```
The particular example of Pk_interpolator is something that's already available in the Boltzmann Theory objects. Is the idea of this issue to brainstorm a list of other theory calculations not covered by CAMB/CLASS?
Sure, I mean something different, apologies for not being clear enough. Pk_interpolator is often the raw ingredient needed by many other intermediate theory calculations. As an example, you need Pk_interpolator to compute the expected mass function in https://github.com/simonsobs/SOLikeT/blob/2874450452d77649340d8b2e47a09ff6b291695b/soliket/clusters/tinker.py#L129, or to compute cl_gg, cl_kg in https://github.com/simonsobs/SOLikeT/blob/1d0a7fc31f7ad8cff6deaf44808fe1ad94671b79/soliket/xcorr/limber.py#L32.
Since in principle the Pk_interpolator is the same for all these observables, I'd like to have it initialized once and for all with the same input settings (e.g., the same kmax, the same transfer, the same redshift range, all wide enough to encompass all the tracers, etc.). In other words, I don't want clusters, xcorr, and other likelihoods to send different requirements to the Boltzmann code for the very same object. Does it make sense to you?
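For concreteness, here is roughly the shape of the requirements dict a likelihood hands to the theory provider for a Pk_interpolator. This is a hedged sketch: the key names follow Cobaya's `Pk_interpolator` requirement, but the specific `z` grid, `k_max`, and variable pair below are illustrative placeholders, not values taken from any SOLikeT likelihood.

```python
def get_requirements():
    """Illustrative requirements dict for a Pk_interpolator.

    The z grid, k_max, and ("delta_tot", "delta_tot") pair are
    placeholder values; each likelihood would choose settings wide
    enough to cover its own tracers.
    """
    return {
        "Pk_interpolator": {
            "z": [0.0, 0.5, 1.0, 2.0, 4.0],  # redshifts at which P(k, z) is needed
            "k_max": 10.0,                    # maximum wavenumber required
            "nonlinear": False,               # linear matter power spectrum
            "vars_pairs": [("delta_tot", "delta_tot")],
        }
    }
```

The point of the issue is that clusters, xcorr, etc. would agree on one such dict rather than each emitting its own.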
Cobaya should already combine requirements so that the required k and z ranges span all calculations, and then do a single computation. The interpolator object should also be cached, so each theory will receive the same interpolator object when called with the same field arguments. It could be a lot slower if you force the k range to be fixed at the maximum required by any likelihood (though it would make results more stable between different likelihood combinations; however, changes with combination are a general Cobaya "feature", and ranges should of course be converged anyway).
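Schematically, the requirement-combining described above works like the hypothetical helper below (this is not Cobaya's actual code, just an illustration of taking the spanning ranges across likelihoods so the Boltzmann code runs once):

```python
def merge_pk_requirements(requests):
    """Combine Pk_interpolator requests from several likelihoods into one.

    Hypothetical sketch of requirement merging: take the largest k_max,
    the union of all requested redshifts, and nonlinear if anyone asks
    for it, so a single computation covers every likelihood.
    """
    return {
        "k_max": max(r["k_max"] for r in requests),
        "z": sorted({z for r in requests for z in r["z"]}),
        "nonlinear": any(r.get("nonlinear", False) for r in requests),
    }
```

For example, if clusters asks for `k_max=5` up to `z=2` and xcorr asks for `k_max=10` up to `z=4`, the merged request covers both, and both likelihoods can then be served the same cached interpolator.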
Identify theory predictions that are shared by many likelihoods (e.g., the Pk interpolator needed by both clusters and xcorr) and compute them once for all likelihoods.