Closed: Char-Aznable closed this issue 5 years ago.
The `minimize` function expects a 0-argument callable in its first position, but `tfp.vi.monte_carlo_variational_loss` returns a `Tensor` (the output of the loss calculation). I think you need to pass `lambda: tfp.vi.monte_carlo_variational_loss(...)` instead. This will be a function (by way of closure) of the trainable variables in your model, and will re-sample from the variational model on each call in the optimizer loop.
Note that `tfp.vi.fit_surrogate_posterior` exists as lightweight sugar implementing the solution Chris described.
Dave
(On Mon, Oct 28, 2019 at 12:54 PM, Christopher Suter had posted the comment quoted above.)
@csuter @davmre Thanks for the information! I managed to get it to run by passing `lambda: tfp.vi.monte_carlo_variational_loss(...)` to the `minimize()` call. However, I observe a 100x slowdown compared to the TensorFlow 1.x-compatible version. I've opened a new issue, #629, and I'm closing this one.
Hi, I want to train a normalizing flow using Adam. My model looks like this:
This gives the error:
I used to be able to define a similar training op in TensorFlow 1.x with `tf.compat.v1.train.AdamOptimizer`. It seems TensorFlow 2.0's Adam optimizer requires explicitly specifying the trainable variables, and I don't know how to get the trainable variables from TensorFlow Probability. Both `flow` and `dist` have empty `trainable_variables`: