Hi,
I think the losses are being recorded incorrectly: in the train function in src.train, the L1, L2, and mean losses are logged under the wrong tags:
```python
writer.add_scalar("Loss_l1/train_batch", loss_mean, epoch*len(train_dataloader) + i)
writer.add_scalar("Loss_l2/train_batch", loss_mean_l1, epoch*len(train_dataloader) + i)
writer.add_scalar("Loss/train_batch", loss_mean_l2, epoch*len(train_dataloader) + i)
```

Each tag is paired with a different loss variable than its name suggests, so the TensorBoard curves end up swapped.
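Assuming the variable names mean what they say (loss_mean_l1 is the L1 loss, loss_mean_l2 the L2 loss, loss_mean the overall mean), the intended mapping would presumably look like the sketch below. A stub writer stands in for torch.utils.tensorboard.SummaryWriter here just so the mapping can be checked in isolation; the values and step arithmetic are illustrative only.

```python
class StubWriter:
    """Minimal stand-in for SummaryWriter that records the last value per tag."""

    def __init__(self):
        self.logged = {}

    def add_scalar(self, tag, value, step):
        self.logged[tag] = value


writer = StubWriter()

# Example values: pretend these came out of one training batch.
loss_mean, loss_mean_l1, loss_mean_l2 = 0.5, 0.3, 0.2
epoch, i, steps_per_epoch = 0, 0, 100
global_step = epoch * steps_per_epoch + i

# Corrected mapping: each tag receives the loss variable its name refers to.
writer.add_scalar("Loss_l1/train_batch", loss_mean_l1, global_step)
writer.add_scalar("Loss_l2/train_batch", loss_mean_l2, global_step)
writer.add_scalar("Loss/train_batch", loss_mean, global_step)

print(writer.logged["Loss_l1/train_batch"])  # 0.3 (the L1 loss, as expected)
```

With this ordering, the "Loss_l1" curve in TensorBoard actually tracks the L1 loss rather than the mean, and likewise for the other two tags.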