Open somu15 opened 2 years ago
Is there documentation on the input structure for the function `log_posterior_fn` in line 114 of `models.py`? These are essentially the BNN parameters, and I want to understand how they are arranged when computing this function. As an example, consider a network with two hidden layers of 10 neurons each. Thank you in advance!
Thanks for the question!

The input to `log_posterior_fn` is designed to meet the requirements of the MCMC kernels in TensorFlow Probability (TFP), so that we can use TFP's sampling methods (e.g. HMC, adaptive HMC) directly in NeuralUQ. For details and examples, I recommend going through the TFP documentation.
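To illustrate the contract an MCMC kernel imposes, here is a minimal sketch: the kernel only ever calls the target as a black-box log-density of the current state (a list of arrays). This is a toy random-walk Metropolis step written for illustration, not TFP's implementation and not NeuralUQ's API; all names here are hypothetical.

```python
import numpy as np

def metropolis_step(state, log_posterior_fn, rng, step=0.1):
    """One toy Metropolis step: the kernel treats log_posterior_fn as a
    black box mapping a list of parameter arrays to a scalar log-density."""
    proposal = [p + step * rng.normal(size=p.shape) for p in state]
    log_accept = log_posterior_fn(proposal) - log_posterior_fn(state)
    if np.log(rng.uniform()) < log_accept:
        return proposal
    return state

# A stand-in target: standard-normal log-density over a list of arrays
def log_posterior_fn(params):
    return -0.5 * sum(np.sum(p ** 2) for p in params)

rng = np.random.default_rng(0)
state = [np.zeros((2, 3)), np.zeros(3)]  # e.g. one weight matrix, one bias
for _ in range(100):
    state = metropolis_step(state, log_posterior_fn, rng)
print(all(np.isfinite(p).all() for p in state))
```

The point is only the interface: as long as the function takes the parameter list and returns a scalar log-probability, any such sampler can drive it.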
For BNNs, as implemented in NeuralUQ, the input is designed to be a list of weights and biases. If a single BNN is considered, the input to `log_posterior_fn` is a list `[w1, w2, ..., wn, b1, b2, ..., bn]`, where `wi`, `bi`, i = 1, ..., n, are the weights and biases in the same order as the network's layers, from the input layer to the output layer. If multiple BNNs are considered, the input is still a single list, formed by concatenating the lists from the individual BNNs. NeuralUQ tracks each BNN so that this combined list can be decomposed accordingly to compute the corresponding log likelihood and log prior, which is a separate story.
The order of parameters in that list does not have to be this way, though. Changing the order requires rewriting the `__call__` method of the surrogate and the `log_prob` method of the corresponding variable, so for now we don't recommend it.
Thanks again for your question. It made me realize we need good documentation for this functionality. Let me know if you have any other questions.
Thank you for the response.