lyuwenyu / RT-DETR

[CVPR 2024] Official RT-DETR (RTDETR paddle pytorch), Real-Time DEtection TRansformer, DETRs Beat YOLOs on Real-time Object Detection. 🔥 🔥 🔥
Apache License 2.0

Why isn't the validation loss on the same scale as the training loss? #355


Geo99pro commented 2 months ago

Star RTDETR: please click star on the RT-DETR homepage first to support this project.

Hi @lyuwenyu, I'm truly sorry if my question is basic, but I would like to understand something that is not clear to me.

Even after normalizing the validation dataset as you suggested, I still don't understand why the validation loss is not on the same order of magnitude as the training loss.

For example, with an average train loss of 16.9771, I get an average val loss of 0.911.

I don't understand why.

Here's the code I use (before reporting the average val loss, I divide the accumulated loss by len(data_loader)).

Could you please explain whether I'm doing something wrong? I am definitely normalizing the validation set in the same way as the training set. Thanks.
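For readers without the screenshot, here is a minimal sketch of the kind of averaging loop described above. The function name, the forward/criterion signatures, and the assumption that the criterion returns a dict of loss components are all hypothetical; this is not the actual code from the screenshot.

```python
import torch

@torch.no_grad()
def average_val_loss(model, criterion, data_loader, device):
    """Hypothetical reconstruction: sum one scalar loss per batch,
    then divide by the number of batches (len(data_loader))."""
    model.eval()
    criterion.eval()
    total = 0.0
    for samples, targets in data_loader:
        samples = samples.to(device)
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        outputs = model(samples)                 # eval forward
        loss_dict = criterion(outputs, targets)  # dict of loss components
        # If fewer components (or different weights) are summed here than in
        # the number printed during training, the two values will not be on
        # the same scale.
        total += sum(loss_dict.values()).item()
    return total / len(data_loader)
```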

lyuwenyu commented 2 months ago

https://github.com/lyuwenyu/RT-DETR/blob/5b628eaa0a2fc25bdafec7e6148d5296b144af85/rtdetr_pytorch/src/solver/det_engine.py#L74-L83

I think you can use the same code as in training, and use metric_logger to print the loss.
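A minimal sketch of that suggestion: run a no-grad pass over the validation loader and log the loss with the same MetricLogger/reduce_dict bookkeeping that train_one_epoch in rtdetr_pytorch/src/solver/det_engine.py uses, so the printed train and val losses contain the same components. The import path, print frequency, and forward signature below are assumptions and may need adjusting to your checkout.

```python
import math
import torch

# Assumed import path; in rtdetr_pytorch these helpers live under src/misc.
from src.misc import MetricLogger, reduce_dict


@torch.no_grad()
def evaluate_loss(model, criterion, data_loader, device, print_freq=10):
    model.eval()
    criterion.eval()

    metric_logger = MetricLogger(delimiter="  ")
    header = 'Val:'

    for samples, targets in metric_logger.log_every(data_loader, print_freq, header):
        samples = samples.to(device)
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

        # In eval mode the denoising branch is off, so denoising loss terms
        # that appear during training will not show up here.
        outputs = model(samples)
        loss_dict = criterion(outputs, targets)

        # Same reduction and logging style as training, so the reported
        # numbers are directly comparable.
        loss_dict_reduced = reduce_dict(loss_dict)
        loss_value = sum(loss_dict_reduced.values()).item()

        if not math.isfinite(loss_value):
            print(f"Loss is {loss_value}, skipping this batch")
            continue

        metric_logger.update(loss=loss_value, **loss_dict_reduced)

    metric_logger.synchronize_between_processes()
    print("Averaged val stats:", metric_logger)
    return {k: meter.global_avg for k, meter in metric_logger.meters.items()}
```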

Geo99pro commented 2 months ago

Okay @lyuwenyu, I'm going to try it as you suggested and see what happens.

Thanks for your answer!