Closed: jskowron closed this issue 3 years ago
Hi @jskowron.
This is already somewhat supported as part of the transform function, via the derived_param_names feature. It does mean, however, that you need to move your computation into the transform function.
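A minimal sketch of that pattern (assuming this refers to UltraNest's ReactiveNestedSampler; the parameter names and the derived quantity here are invented for illustration):

```python
import numpy as np
import ultranest

param_names = ["a", "b"]

def transform(cube):
    # map the unit cube to the physical parameters
    a = cube[0] * 10.0           # a in [0, 10]
    b = cube[1] * 2.0 - 1.0      # b in [-1, 1]
    # anything appended beyond the physical parameters is stored
    # as a derived parameter alongside the posterior samples
    ab_sum = a + b
    return np.array([a, b, ab_sum])

def loglike(params):
    # the transform output, including the derived entries, is passed through
    a, b, ab_sum = params
    return -0.5 * ((a - 5.0)**2 + b**2)

sampler = ultranest.ReactiveNestedSampler(
    param_names, loglike, transform=transform,
    derived_param_names=["ab_sum"])
result = sampler.run()
```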
However, in practice, storing such blobs is not as useful in nested sampling as it is with MCMC, because most nested sampling points receive essentially zero weight. Therefore, it is usually practical to take the posterior samples after a run (a few thousand points) and compute the necessary information again.
Another possibility is to define a model function which returns all data products, and to memoize it with joblib.Memory. Calling it again afterwards on the posterior samples then has no additional cost.
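For example, a sketch of the memoization approach (compute_model and its data products are hypothetical stand-ins; joblib.Memory hashes the arguments and caches results on disk, so re-calling with the same inputs is essentially free):

```python
import numpy as np
from joblib import Memory

memory = Memory("model_cache", verbose=0)  # arbitrary on-disk cache directory

@memory.cache
def compute_model(a, b):
    # stand-in for an expensive forward model returning all data products
    x = np.linspace(0.0, 1.0, 100)
    prediction = a * np.sin(2 * np.pi * x) + b   # fed into the likelihood
    intermediate = np.outer(x, x)                # byproduct of scientific interest
    return prediction, intermediate

def loglike(params):
    a, b = params
    prediction, _ = compute_model(a, b)
    return -0.5 * np.sum(prediction**2)

# after the run, re-evaluating on the posterior samples hits the cache,
# so recovering the intermediate products costs nothing extra:
# for a, b in result["samples"]:
#     prediction, intermediate = compute_model(a, b)
```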
The log_likelihood function often requires extensive calculations. As a byproduct, intermediate values are computed during the sampling process, and these are often of scientific interest. Since the output of the log_likelihood function consists of a single number, all of the intermediate values have to be discarded and recalculated after the sampling, which can take as long as the sampling itself.
It would be of great help if, for every evaluated point, an arbitrary vector could also be returned from the loglike function and stored beside its likelihood in the final samples.
This feature is analogous to the "blobs" feature in the emcee package: https://emcee.readthedocs.io/en/stable/user/blobs/
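For reference, the emcee mechanism documented at that link works roughly like this: any extra values returned by the log-probability function are stored per sample and retrieved with get_blobs():

```python
import numpy as np
import emcee

def log_prob(theta):
    log_likelihood = -0.5 * np.sum(theta**2)
    # any additional return values are stored per sample as "blobs"
    mean = float(np.mean(theta))
    return log_likelihood, mean

nwalkers, ndim = 8, 2
p0 = np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 100)
blobs = sampler.get_blobs()  # shape (nsteps, nwalkers)
```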