Geo99pro opened this issue 1 month ago
lyuwenyu: Yes, you are right. We did not normalize bboxes in the val_dataloader. You can do it by adding the transform at L20 of dataloader.yml after L35:
https://github.com/lyuwenyu/RT-DETR/blob/main/rtdetr_pytorch/configs/rtdetr/include/dataloader.yml#L20
https://github.com/lyuwenyu/RT-DETR/blob/main/rtdetr_pytorch/configs/rtdetr/include/dataloader.yml#L35
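As a rough sketch of what that change could look like in the val_dataloader transforms (assuming L20 is the ConvertBox op from the train pipeline and L35 sits at the end of the val transforms list; the surrounding ops here are illustrative, not copied from the actual file):

```yaml
val_dataloader:
  dataset:
    transforms:
      type: Compose
      ops:
        - {type: Resize, size: [640, 640]}
        - {type: ToImageTensor}
        - {type: ConvertDtype}
        # Added: the same box op used in the train pipeline, so that
        # validation targets are also normalized cxcywh boxes.
        - {type: ConvertBox, out_fmt: 'cxcywh', normalize: True}
```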
Geo99pro: Oh yes, of course! Then it can run L131 of the _transform function in
https://github.com/lyuwenyu/RT-DETR/blob/64878acad2f58ed34579e5a5ec45da1044587e09/rtdetr_pytorch/src/data/transforms.py.
I appreciate your answer @lyuwenyu, thank you.
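For intuition, the normalization step being discussed amounts to dividing absolute pixel box coordinates by the image size so they fall in [0, 1]. A minimal illustrative sketch (the helper name is hypothetical, not RT-DETR's actual code):

```python
def normalize_boxes(boxes, img_w, img_h):
    """Normalize (cx, cy, w, h) boxes from pixel units to [0, 1].

    Divides x-coordinates and widths by the image width, and
    y-coordinates and heights by the image height, matching what a
    normalizing box transform does in the training pipeline.
    """
    return [(cx / img_w, cy / img_h, w / img_w, h / img_h)
            for cx, cy, w, h in boxes]

# A centered 64x48 box in a 640x480 image becomes (0.5, 0.5, 0.1, 0.1).
print(normalize_boxes([(320, 240, 64, 48)], 640, 480))
```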
Star RTDETR: Please star RT-DETR on its homepage to support this project and help more people discover it.
Describe the bug
Hello everyone,
First of all, I'd like to congratulate you on the work you've done on this project.
I'm in the process of adding early stopping to the code. To do this, I first need to compute the validation error. I managed to compute it in the evaluate function in rtdetr_pytorch/src/solver/det_engine.py by calling criterion(outputs, targets) and summing the result. However, I noticed that the validation data used by the model during evaluation are not normalized the way they are in training mode, which leads to very high validation error values. I wonder whether this lack of normalization is intentional, and if so, what the reason is?
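The accumulation described above can be sketched as follows. This is a self-contained toy example, not the actual det_engine.py code: the toy criterion stands in for RT-DETR's SetCriterion, which returns a dict of weighted loss terms.

```python
def validation_loss(criterion, loader):
    """Average the summed criterion losses over an iterable of batches.

    Mirrors the idea of calling criterion(outputs, targets) inside the
    evaluate loop and summing the resulting loss dict per batch.
    """
    total, n_batches = 0.0, 0
    for outputs, targets in loader:
        loss_dict = criterion(outputs, targets)  # dict of scalar loss terms
        total += sum(float(v) for v in loss_dict.values())
        n_batches += 1
    return total / max(n_batches, 1)

def toy_criterion(outputs, targets):
    # Stand-in for SetCriterion: fixed per-batch loss terms for the demo.
    return {"loss_bbox": 0.5, "loss_giou": 0.25}

print(validation_loss(toy_criterion, [(None, None), (None, None)]))  # 0.75
```

Comparing this running average between epochs is one simple basis for an early-stopping check.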
NB: the bbox values in the train function are indeed normalized, whereas the validation values are not, as shown in the images below.
Thanks in advance for your help, and I apologize if my question is basic; I'm still learning.
To Reproduce