pof-declaneaston opened this issue 1 year ago (status: Open)
Yes, I agree. Something like `self.loss_tracker = keras.metrics.Mean(name="loss")` should be created for each additional loss in the Model's `__init__`, and in `train_step` and `test_step` each tracker should be updated with `self.loss_tracker.update_state(loss)`.
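Roughly what I mean, as a minimal sketch (the `ToyRegressionModel`, its single Dense layer, and the squared-error loss are placeholders for illustration, not the TFRS implementation):

```python
import tensorflow as tf
from tensorflow import keras


class ToyRegressionModel(tf.keras.Model):
    """Keras model that accumulates its losses in Mean trackers."""

    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(
            1, kernel_regularizer=tf.keras.regularizers.l2(1e-4)
        )
        # One running-mean tracker per reported loss value.
        self.loss_tracker = keras.metrics.Mean(name="loss")
        self.regularization_loss_tracker = keras.metrics.Mean(name="regularization_loss")
        self.total_loss_tracker = keras.metrics.Mean(name="total_loss")

    def call(self, features):
        return self.dense(features)

    @property
    def metrics(self):
        # Listing the trackers here lets Keras reset them at the start of
        # each epoch and of each evaluate() call.
        return [
            self.loss_tracker,
            self.regularization_loss_tracker,
            self.total_loss_tracker,
        ]

    def train_step(self, data):
        features, labels = data
        with tf.GradientTape() as tape:
            predictions = self(features, training=True)
            loss = tf.reduce_mean(tf.square(labels - predictions))
            regularization_loss = tf.add_n(self.losses)
            total_loss = loss + regularization_loss
        gradients = tape.gradient(total_loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))

        # Accumulate running means instead of returning raw per-batch values.
        self.loss_tracker.update_state(loss)
        self.regularization_loss_tracker.update_state(regularization_loss)
        self.total_loss_tracker.update_state(total_loss)
        return {m.name: m.result() for m in self.metrics}

    def test_step(self, data):
        features, labels = data
        predictions = self(features, training=False)
        loss = tf.reduce_mean(tf.square(labels - predictions))
        regularization_loss = tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        self.regularization_loss_tracker.update_state(regularization_loss)
        self.total_loss_tracker.update_state(loss + regularization_loss)
        return {m.name: m.result() for m in self.metrics}
```

With the trackers exposed through the `metrics` property, Keras resets them at epoch boundaries, so the values reported by `fit` and `evaluate` become running means over all batches seen so far rather than just the last batch.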
When I call `fit` or `evaluate` on a `tfrs.models.Model`, the loss values returned (`total_loss`, `loss`, and `regularization_loss`) reflect only the last batch (or, for `fit`, the last batch of each epoch). I believe this is because of the way `train_step` and `test_step` are implemented.
From what I understand, the "metrics" values returned from these functions are meant to cover the entire dataset (or the epoch so far) up to and including the current batch, but the values reported here are just the last batch's. The following code reproduces the issue.
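A minimal sketch of such a reproduction (the original snippet is not shown here; `ToyModel`, the synthetic features/labels, and the two deliberately contrasting batches are assumptions, not the original code):

```python
import numpy as np
import tensorflow as tf
import tensorflow_recommenders as tfrs


class ToyModel(tfrs.models.Model):
    """Trivial TFRS model whose loss depends strongly on the current batch."""

    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(1)

    def compute_loss(self, inputs, training=False):
        features, labels = inputs
        predictions = self.dense(features)
        return tf.reduce_mean(tf.square(labels - predictions))


# Two batches with very different label scales, so their per-batch losses
# are easy to tell apart from an epoch-level running mean.
features = np.random.normal(size=(64, 4)).astype("float32")
labels = np.concatenate([np.zeros(32), 1000.0 * np.ones(32)]).astype("float32")[:, None]
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(32)

model = ToyModel()
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))

# If "loss" were a running mean over the epoch it would land between the two
# batch losses; the reported value instead matches only the final batch.
history = model.fit(dataset, epochs=1)
print(history.history)
print(model.evaluate(dataset, return_dict=True))
```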
I get the following as output: