Also, I may be able to add RealNVP as a bijector instead of an autoregressive flow. I don't like the autoregressive assumption for non-time-series data (happy to be corrected if this isn't actually a problem), so I would like to implement this with RealNVP.
Hi Cameron!
Regarding RealNVP: most density estimators used in `sbi` are built with the `nflows` library, e.g. `build_maf` in https://github.com/mackelab/sbi/blob/main/sbi/neural_nets/flow.py#L75. `nflows` contains sufficient building blocks for constructing RealNVP, see for example https://github.com/bayesiains/nflows/blob/master/nflows/flows/realnvp.py. You could thus write a `build_realnvp` function and pass it to any algorithm as explained in the flexible interface tutorial. A PR with that function would be welcome!
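For concreteness, here is a minimal sketch of what such a `build_realnvp` could look like, built from `nflows`' coupling transforms. The function name, signature, and conditioning setup are assumptions in the style of `build_maf`, not an existing `sbi` API, and `sbi`'s real builders additionally handle z-scoring and embedding nets, which this omits:

```python
import torch
from torch.nn import functional as F
from nflows.flows.base import Flow
from nflows.distributions.normal import StandardNormal
from nflows.transforms.base import CompositeTransform
from nflows.transforms.coupling import AffineCouplingTransform
from nflows.nn.nets import ResidualNet


def build_realnvp(batch_x, batch_y, hidden_features=50, num_transforms=5):
    """Conditional RealNVP density estimator for p(x | y).

    batch_x, batch_y are example batches used only to infer
    dimensionalities, mirroring how sbi's builders (e.g. build_maf)
    are called.
    """
    x_dim = batch_x.shape[1]
    y_dim = batch_y.shape[1]

    # Alternating mask: positive entries are transformed by the coupling
    # layer, negative entries pass through unchanged.
    mask = torch.ones(x_dim)
    mask[::2] = -1

    def create_net(in_features, out_features):
        # The conditioning variable y enters via nflows' context mechanism.
        return ResidualNet(
            in_features,
            out_features,
            hidden_features=hidden_features,
            context_features=y_dim,
            num_blocks=2,
            activation=F.relu,
        )

    transforms = []
    for _ in range(num_transforms):
        transforms.append(AffineCouplingTransform(mask, create_net))
        mask = -mask  # flip so every dimension gets transformed eventually

    return Flow(CompositeTransform(transforms), StandardNormal((x_dim,)))
```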
Regarding frequentist estimation instead of Bayesian inference: in principle, I think we can definitely make those approaches part of `sbi` as well. However, this would be a more sophisticated PR. To begin with, the question is which algorithm to implement. The review paper by Cranmer, Brehmer, and Louppe (2019) discusses a number of approaches (e.g., using calibrated classifiers) and is a great place to start. A more recent approach, published after that review, is "Confidence Sets and Hypothesis Testing in a Likelihood-Free Inference Setting".
Is there any way to perform hill-climbing on the parameters of the simulator? I'm thinking of an iterative process (either variational Bayes or MLE) where you condition on the parameters and the x values, train the density/ratio estimator, and then take gradient steps through the density estimator with respect to the parameters of the real model. I suspect this would be involved; I've run some tests doing this with my own model, but haven't really gotten it to work well. Does this make sense?
Optimisation of, say, a neural network trained by `SNLE_A` with respect to the parameters is possible in principle; you'd get back a MAP estimate. However, if you are only interested in frequentist estimators, other algorithms might be more efficient; I'd recommend taking a close look at the references above.
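As a rough illustration of that kind of optimisation, here is a hedged sketch of gradient ascent on a trained posterior's log-probability. `posterior`, `prior`, and `x_o` are placeholders for a posterior built by `sbi`, a prior, and an observation; the sketch relies on `log_prob` being differentiable with respect to θ, which holds for the torch-based estimators:

```python
import torch

# Start from a prior draw; requires_grad lets us optimise theta itself.
theta = prior.sample((1,)).clone().requires_grad_(True)
optimizer = torch.optim.Adam([theta], lr=1e-2)

for _ in range(500):
    optimizer.zero_grad()
    # Negative log posterior of the current theta given the observation.
    loss = -posterior.log_prob(theta, x=x_o).sum()
    loss.backward()
    optimizer.step()

theta_map = theta.detach()  # gradient-based MAP estimate
```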
Hi all... I think it would be interesting to get likelihood-ratio estimators (like CARL) into `sbi`, which would provide the ability to do MLE or frequentist confidence intervals. Actually, SNRE is closely related: if the network learns NN(x; θ) ≈ p(x|θ)/p(x), then it is proportional to the likelihood, so it can be used directly for MLE, θ̂ = argmax_θ NN(x; θ), and for intervals (the details will depend on how the intervals are constructed: asymptotic or via a Neyman construction). E.g., the likelihood ratio p(x|θ)/p(x|θ̂) can be estimated via NN(x; θ)/NN(x; θ̂).
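To make that last step concrete, here is a hedged sketch of a grid-based MLE and an asymptotic (Wilks) interval from such a ratio estimator. `log_ratio` and `x_o` are placeholders, with `log_ratio(theta, x)` assumed to return an estimate of log p(x|θ)/p(x), e.g. the logit of SNRE's classifier:

```python
import torch
from scipy.stats import chi2

# Scan a 1-d parameter on a grid; log p(x_o) is constant in theta,
# so the argmax of the log ratio is the MLE.
theta_grid = torch.linspace(-3.0, 3.0, 601).unsqueeze(1)
log_r = log_ratio(theta_grid, x_o.expand(len(theta_grid), -1)).squeeze()
theta_hat = theta_grid[log_r.argmax()]

# Likelihood-ratio statistic t(theta) = -2 log [p(x_o|theta) / p(x_o|theta_hat)].
t = -2.0 * (log_r - log_r.max())

# Asymptotic 68% confidence set: theta where t stays below the chi2_1 quantile.
level = chi2.ppf(0.68, df=1)
inside = theta_grid[t <= level]
print(float(inside.min()), float(inside.max()))
```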
Hi Kyle,
Big fan of your work. I still need to read your review paper and the other work that Jan-Matthias suggested; I've been busy with other things. CARL seems very interesting. Would you be willing to discuss this over email? Thanks,
Cameron
I'd like to revive this issue a bit in conjunction with the discussion in #306.
For a novice user and novice Bayesian (like myself), it is great to see tools like `sbi` that produce a posterior I can sample from. The statistical information that such a tool could extract from the dataset/simulation at hand feels like a big honey pot: expectation values, uncertainties (confidence/credible intervals), covariances, you name it, all in the n dimensions relevant to my problem.
So, it would be great to have utilities that could produce MAP estimates (see ideas in #306) or MLEs at a minimum. This way, users who come with their own datasets/simulations can give it a shot and learn end-to-end what the method is capable of.
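For what it's worth, several of those summaries are already within reach from posterior samples; a minimal sketch, where `posterior` and `x_o` are placeholders for a trained `sbi` posterior and an observation:

```python
import torch

samples = posterior.sample((10_000,), x=x_o)  # shape (10000, dim)

mean = samples.mean(dim=0)  # expectation values per dimension
std = samples.std(dim=0)    # marginal uncertainties
cov = torch.cov(samples.T)  # full posterior covariance matrix
# Equal-tailed 95% credible intervals per dimension:
lo, hi = samples.quantile(torch.tensor([0.025, 0.975]), dim=0)
```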
Less of an issue and more of a question, but is there any way to use simulation-based estimators to do MLE instead of fully Bayesian inference? In particular SNLE, which shouldn't have serious implementation issues to modify for this. I'm happy to do the work if necessary, but I've never really contributed to open source, so I'm not exactly sure of the process. Thanks,
Cameron
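For what the SNLE variant of this could look like: a hedged sketch that runs gradient ascent directly on the learned likelihood. It assumes `density_estimator` is the `nflows` flow returned by SNLE's `train()`, so that `log_prob(inputs, context)` models log p(x|θ); `prior` and `x_o` are again placeholders:

```python
import torch

theta = prior.sample((1,)).clone().requires_grad_(True)
optimizer = torch.optim.Adam([theta], lr=1e-2)

for _ in range(500):
    optimizer.zero_grad()
    # Maximise the learned log-likelihood log q(x_o | theta); unlike the
    # MAP sketch above, no prior term enters, so this yields an MLE.
    loss = -density_estimator.log_prob(x_o, context=theta).sum()
    loss.backward()
    optimizer.step()

theta_mle = theta.detach()
```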