Closed — stefanradev93 closed this issue 4 months ago
Not sure if my setup matches what you are facing. However, for me, explicitly adding the loss in `network.compute_metrics()` solves this.
Proposed change within `coupling_flow/couplings/all_in_one_coupling.py`:

```python
def compute_metrics(self, x, y, y_pred, **kwargs):
    metrics = dict(loss=self.compute_loss(x, y, y_pred, **kwargs))
    return metrics
```
With this amortizer class, calling `amortizer.fit()` results in:

```
Epoch 1/2
5/5 ━━━━━━━━━━━━━━━━━━━━ 1s 105ms/step - inference/loss: 2.9389
Epoch 2/2
5/5 ━━━━━━━━━━━━━━━━━━━━ 1s 104ms/step - inference/loss: 1.8689
```
Wouldn't this compute the loss twice, unnecessarily?
True, very much not ideal.
Just saw that the total loss (summary + inference net) is currently stored by the `Amortizer.compute_loss()` method in a `keras.metrics.Mean` object called `loss_tracker`.
Going by that, we could read out that total loss in `Amortizer.compute_metrics()` with `Amortizer.loss_tracker.result()`.
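To illustrate the idea without recomputing the loss, here is a minimal, dependency-free sketch of the tracker pattern. `MeanTracker` is a hypothetical stand-in for `keras.metrics.Mean`, and the `Amortizer` shown here is a simplified illustration, not the actual BayesFlow class:

```python
class MeanTracker:
    """Hypothetical stand-in for keras.metrics.Mean: a running average."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update_state(self, value):
        self.total += value
        self.count += 1

    def result(self):
        return self.total / max(self.count, 1)


class Amortizer:
    """Simplified sketch: compute_loss stores the total loss in a tracker,
    and compute_metrics reads it back out instead of recomputing it."""

    def __init__(self):
        self.loss_tracker = MeanTracker()

    def compute_loss(self, summary_loss, inference_loss):
        total = summary_loss + inference_loss
        self.loss_tracker.update_state(total)
        return total

    def compute_metrics(self):
        # Read out the tracked running mean -- no second loss computation
        return {"loss": self.loss_tracker.result()}
```

This avoids the double computation raised above: the loss is computed once per step, and the metrics dict only reads the tracker's running result.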
Or we could do the same thing one level below in all of the networks. What do we really want to have available as metrics?
Solved by simply calling `base_metrics = super().compute_metrics(...)` and then merging our extra metrics with those.
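The merging pattern can be sketched as follows. `BaseNetwork` here is a hypothetical stand-in for the Keras base class (whose `compute_metrics()` already returns a dict containing the tracked loss), and the extra metric name is made up for illustration:

```python
class BaseNetwork:
    """Stand-in for the base class whose compute_metrics()
    already includes the tracked "loss" entry."""

    def compute_metrics(self, x, y, y_pred):
        return {"loss": 1.87}


class AmortizedNetwork(BaseNetwork):
    def compute_metrics(self, x, y, y_pred):
        # Let the base class assemble its metrics (including the loss) ...
        base_metrics = super().compute_metrics(x, y, y_pred)
        # ... then merge our extra metrics with those
        extra_metrics = {"inference/kl": 0.12}  # hypothetical extra metric
        return {**base_metrics, **extra_metrics}
```

With this pattern the loss stays visible in the Keras progress bar alongside any custom metrics, without the subclass having to recompute it.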
The upcoming Keras 3.0 amortizer does not show any losses in the Keras output or progress bar. A fix is urgently needed for the streamlined-backend branch.