Closed: dustinvtran closed this issue 4 years ago.
Here's a good reference for this specific issue from within TensorFlow (with my insight attached): the `self._apply_*` calls in `layer.call` make me think there should be a workaround that overrides that error until it is fixed (?). I think this may not be easy, since TensorFlow uses a list to keep track of the losses; appending to that list on each invocation of `layer.call` will make the list grow without bound. IMO, a "hack" of the layer that could suffice for this particular use case (I'm most likely wrong; untested logic here) is to:
- on the `layer.add_loss` call, check for eager execution ourselves (preventing the `RuntimeError`);
- check whether `layer._losses` already contains the appropriate loss tensor, and if so, replace it rather than appending.

For our purposes, I think you can just call `add_loss` during `build` so the losses are only added once. The losses are KL regularizers on the weights, which means they are not input-dependent. (Well, the loss may no longer be tracked in the gradient tape after the first iteration.)
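A framework-free sketch of the "add losses once, in `build`" idea, using a toy stand-in for a Keras-style layer rather than the real `tf.keras.layers.Layer` (names like `ToyLayer` are made up here so the list mechanics are visible without version-specific Keras behavior):

```python
class ToyLayer:
    """Hypothetical stand-in for a Keras-style layer, not tf.keras."""

    def __init__(self):
        self.built = False
        self._losses = []  # mimics the internal list the framework appends to

    def add_loss(self, loss):
        self._losses.append(loss)

    def build(self):
        self.kernel = [0.5, -1.0, 2.0]
        # Register the weight-only penalty exactly once, at build time,
        # instead of on every forward pass.
        self.add_loss(sum(w * w for w in self.kernel))
        self.built = True

    def call(self, x):
        if not self.built:
            self.build()
        return [xi * wi for xi, wi in zip(x, self.kernel)]

layer = ToyLayer()
for _ in range(5):
    layer.call([1.0, 1.0, 1.0])
print(len(layer._losses))  # 1: the list did not grow with repeated calls
```

Because the penalty depends only on the weights, registering it once at build time loses nothing, and the loss list stays a single entry no matter how many times `call` runs.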
As you note, though, we would still need to extend `add_loss` to drop the error-raising under eager execution. @martinwicke @fchollet: maybe that could be supported in the base `Layer`? (Happy to write that changelist.)
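The "check `layer._losses` and replace" hack can be sketched the same way (again a toy stand-in, not the real Keras internals; the keyed `add_loss` signature is an invention for illustration): a loss re-added on every call overwrites its previous entry instead of growing the list.

```python
class ReplacingToyLayer:
    """Hypothetical stand-in illustrating replace-instead-of-append."""

    def __init__(self):
        self.kernel = [0.5, -1.0, 2.0]
        self._losses = {}  # keyed storage instead of a plain list

    def add_loss(self, key, loss):
        # If this key was already registered, overwrite it; the number
        # of tracked losses stays bounded across repeated calls.
        self._losses[key] = loss

    @property
    def losses(self):
        return list(self._losses.values())

    def call(self, x):
        # Runs on every call, but the entry is replaced, not appended.
        self.add_loss("kernel_penalty", sum(w * w for w in self.kernel))
        return [xi * wi for xi, wi in zip(x, self.kernel)]

layer = ReplacingToyLayer()
for _ in range(5):
    layer.call([1.0, 1.0, 1.0])
print(len(layer.losses))  # 1
```

The real Keras base class keeps a plain list, so doing this for actual TFP layers would require either subclass-level bookkeeping like the above or support in the base `Layer` itself.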
Closing this issue in favor of #630, since I believe some of the layers have been redesigned, but we still have problems with TF2 support.
TFP Layers use `self.add_loss` to accumulate regularizers on TensorFlow Distributions over the kernel and bias parameters (e.g., KL penalties for variational inference). This is not supported in Eager mode. When it is, `self.add_loss` must be used carefully in two ways: losses should be added only once (e.g., during `build()`); and they must be collected consistently across calls (e.g., via `self.losses`). Any thoughts on such a redesign are appreciated.
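As a concrete instance of the weight-only regularizers in question: for a mean-field Gaussian posterior over a kernel, the KL penalty against a standard-normal prior has a closed form that depends only on the variational parameters, never on the layer's inputs, which is why it can safely be registered once at build time. A small sketch in plain Python (the parameter values are made up for illustration):

```python
import math

def kl_diag_gaussian_vs_std_normal(locs, scales):
    """KL( N(loc, scale^2) || N(0, 1) ), summed elementwise -- the kind
    of weight-only penalty a variational layer would hand to add_loss."""
    return sum(
        0.5 * (loc * loc + s * s - 2.0 * math.log(s) - 1.0)
        for loc, s in zip(locs, scales)
    )

# Hypothetical variational parameters for a 3-element kernel.
locs = [0.0, 0.1, -0.2]
scales = [1.0, 0.9, 1.1]

penalty = kl_diag_gaussian_vs_std_normal(locs, scales)
# The penalty is a function of (locs, scales) only: no input tensor
# appears anywhere above, so it could be computed in build() and added once.
print(round(penalty, 4))  # ~0.045
```

Nothing here touches a forward pass, which is exactly the property that makes "add once during `build()`" viable for these KL regularizers.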