Hello!
First of all, I'd like to extend my appreciation for the work put into this project.
I've been exploring the code related to training the autoencoder model, specifically within the "train.py" file. I came across an inconsistency in the calculation of loss between the training and evaluation phases.
During training, the loss is defined as `loss = l2_loss + cos_loss * 0.001`. During evaluation, however, it seems to be calculated slightly differently: `loss = l2_loss(outputs, data) + cos_loss(outputs, data)`, where the `cos_loss` term is **not** multiplied by 0.001.
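To make the difference concrete, here is a minimal, self-contained sketch of the two computations as I read them. The bodies of `l2_loss` and `cos_loss` below are hypothetical stand-ins I wrote for illustration, not the repo's actual implementations:

```python
import torch
import torch.nn.functional as F


def l2_loss(outputs, data):
    # Hypothetical stand-in for the repo's l2_loss: mean squared error.
    return F.mse_loss(outputs, data)


def cos_loss(outputs, data):
    # Hypothetical stand-in for the repo's cos_loss: 1 - cosine similarity,
    # averaged over the batch.
    sim = F.cosine_similarity(outputs.flatten(1), data.flatten(1), dim=1)
    return (1 - sim).mean()


outputs = torch.randn(4, 16)
data = torch.randn(4, 16)

# Training phase as I read train.py: the cosine term is down-weighted.
train_loss = l2_loss(outputs, data) + cos_loss(outputs, data) * 0.001

# Evaluation phase: both terms contribute with equal weight.
eval_loss = l2_loss(outputs, data) + cos_loss(outputs, data)
```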
I'm curious to understand whether this difference is intentional or if it might be an oversight. If intentional, I'd appreciate some insight into the rationale behind this choice.
Thanks!