minghanqin / LangSplat

Official implementation of the paper "LangSplat: 3D Language Gaussian Splatting" [CVPR2024 Highlight]
https://langsplat.github.io/

Inconsistency in Loss Calculation between Training and Evaluation of Autoencoder Model #16

Open MichelRosselli opened 9 months ago

MichelRosselli commented 9 months ago

Hello!

First of all, I'd like to extend my appreciation for the work put into this project.

I've been exploring the code related to training the autoencoder model, specifically within the "train.py" file. I came across an inconsistency in the calculation of loss between the training and evaluation phases.

During training, the loss is defined as follows: `loss = l2loss + cosloss * 0.001`. However, during evaluation, the loss seems to be calculated slightly differently: `loss = l2_loss(outputs, data) + cos_loss(outputs, data)`, where the `cos_loss` term is not multiplied by 0.001.
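To make the discrepancy concrete, here is a minimal sketch of the two loss formulas. Note that the `l2_loss` and `cos_loss` implementations below are assumptions for illustration (the actual definitions live in the repository's `train.py` and may differ in details such as reduction); only the weighting of the cosine term mirrors the issue description.

```python
import numpy as np

def l2_loss(outputs, data):
    # Assumed definition: mean squared error between reconstruction and input.
    return np.mean((outputs - data) ** 2)

def cos_loss(outputs, data):
    # Assumed definition: mean (1 - cosine similarity) over the batch.
    num = np.sum(outputs * data, axis=-1)
    den = np.linalg.norm(outputs, axis=-1) * np.linalg.norm(data, axis=-1)
    return np.mean(1.0 - num / den)

# Toy batch of 4 feature vectors of dimension 8.
rng = np.random.default_rng(0)
outputs = rng.random((4, 8))
data = rng.random((4, 8))

# Training-phase loss as described in the issue: cosine term scaled by 0.001.
train_loss = l2_loss(outputs, data) + cos_loss(outputs, data) * 0.001

# Evaluation-phase loss as described in the issue: cosine term unscaled,
# so it is weighted 1000x more heavily than during training.
eval_loss = l2_loss(outputs, data) + cos_loss(outputs, data)
```

Because `cos_loss` is non-negative here, the evaluation loss is always at least as large as the training loss on the same batch, so the two numbers are not directly comparable across phases.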

I'm curious to understand whether this difference is intentional or if it might be an oversight. If intentional, I'd appreciate some insight into the rationale behind this choice.

Thanks!