tensorflow / probability

Probabilistic reasoning and statistical analysis in TensorFlow
https://www.tensorflow.org/probability/
Apache License 2.0

Support Eager mode in TFP Layers #127

Closed · dustinvtran closed this 4 years ago

dustinvtran commented 6 years ago

TFP Layers use self.add_loss to accumulate regularizers on the TensorFlow Distributions over the kernel and bias parameters (e.g., KL penalties for variational inference). This is not currently supported in Eager mode. Once it is, self.add_loss must be used carefully in two ways:

  1. ensure that its tensor computation is performed only once during a forward pass and effectively cached (e.g., via build());
  2. avoid writing such losses to global collections as a side effect; a layer's losses should be grabbed from the layer itself and not via tf.GraphKeys (e.g., write to self.losses). (See the sketch after this list.)
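For concreteness, here is a minimal sketch of a layer that satisfies both points, assuming the zero-argument-callable form of add_loss that tf.keras later added for losses that depend only on weights. The class name DenseReparameterizationSketch and its initializers are illustrative, not TFP's actual implementation:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions


class DenseReparameterizationSketch(tf.keras.layers.Layer):
    """Illustrative sketch only; not TFP's actual layer."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        n_in = int(input_shape[-1])
        self.kernel_loc = self.add_weight(
            name="kernel_loc", shape=[n_in, self.units],
            initializer="glorot_uniform")
        self.kernel_scale = self.add_weight(
            name="kernel_scale", shape=[n_in, self.units],
            initializer=tf.constant_initializer(-5.0))

        def kernel_kl():
            posterior = tfd.Normal(self.kernel_loc,
                                   tf.nn.softplus(self.kernel_scale))
            prior = tfd.Normal(0.0, 1.0)
            return tf.reduce_sum(tfd.kl_divergence(posterior, prior))

        # Point 1: the regularizer is registered once, here in build(),
        # so repeated forward passes never duplicate it.
        # Point 2: it lives on the layer (surfaced via self.losses), not
        # in a global tf.GraphKeys collection.
        self.add_loss(kernel_kl)
        super().build(input_shape)

    def call(self, inputs):
        w = tfd.Normal(self.kernel_loc,
                       tf.nn.softplus(self.kernel_scale)).sample()
        return tf.matmul(inputs, w)
```

Because the callable is re-evaluated whenever self.losses is read, the KL stays differentiable under eager execution while the loss list never grows.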

Any thoughts on such a redesign are appreciated.

wenkesj commented 6 years ago

Here is a good reference for this specific issue from within TensorFlow (with my insight attached):

I think this may not be easy, since TensorFlow uses a list to keep track of the losses; adding to that list on each iteration of layer.call will keep growing it. IMO, a "hack" of the layer could suffice for this particular use case (I'm most likely wrong; untested logic here), along the lines of the sketch below:
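The snippet referenced above isn't reproduced here, so the following is a purely hypothetical sketch of such a hack: guard the add_loss call so the layer's loss list is appended to at most once, however many times call() runs eagerly. GuardedLossLayer and its L2 penalty are invented for the example:

```python
import tensorflow as tf


class GuardedLossLayer(tf.keras.layers.Layer):
    """Hypothetical: adds its regularizer to the loss list only once."""

    def build(self, input_shape):
        self.kernel = self.add_weight(
            name="kernel", shape=[int(input_shape[-1]), 1])
        self._regularizer_added = False
        super().build(input_shape)

    def call(self, inputs):
        if not self._regularizer_added:
            # First call only: later eager iterations skip this branch,
            # so self.losses stops growing.
            self.add_loss(
                lambda: 0.01 * tf.reduce_sum(tf.square(self.kernel)))
            self._regularizer_added = True
        return tf.matmul(inputs, self.kernel)
```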

dustinvtran commented 6 years ago

For our purposes, I think you can just call add_loss during build so the losses are only added once. The losses are KL regularizers on the weights, which means they are not input-dependent. (Well, the cached loss may no longer be tracked in the gradient tape after the first iteration.)
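As a hedged illustration of that caveat, an eager training step against the hypothetical DenseReparameterizationSketch layer sketched above might look as follows; because the registered loss is a zero-argument callable, reading layer.losses inside the tape re-evaluates the KL each step, so it stays tracked:

```python
import tensorflow as tf

layer = DenseReparameterizationSketch(4)  # hypothetical layer from above
opt = tf.keras.optimizers.Adam(0.01)

x = tf.random.normal([8, 3])
y = tf.random.normal([8, 4])

for _ in range(3):
    with tf.GradientTape() as tape:
        preds = layer(x)
        nll = tf.reduce_mean(tf.square(preds - y))
        kl = tf.add_n(layer.losses)  # callables re-evaluated under the tape
        loss = nll + kl
    grads = tape.gradient(loss, layer.trainable_variables)
    opt.apply_gradients(zip(grads, layer.trainable_variables))
```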

As you note, though, we would still need to extend add_loss so it no longer raises an error under eager execution. @martinwicke @fchollet: Maybe that could be supported in the base Layer? (Happy to write that changelist.)

srvasude commented 4 years ago

Closing this issue in favor of #630, since I believe some of the layers have been redesigned, but we still have problems with TF2 support.