hlsheng1 / RDIoU

"Rethinking IoU-based Optimization for Single-stage 3D Object Detection", ECCV2022 accept!
MIT License

Unfair comparison? #5

Open · zen-d opened this issue 2 years ago

zen-d commented 2 years ago

@hlsheng1 Thanks for your work, but I notice that the training epochs are not reported in the paper and, in the config files, are much longer than the standard schedules of baselines such as SECOND. Can the comparison demonstrate the effectiveness of the loss alone if the training budget is not controlled?
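
(For concreteness, the per-config training budget can be read straight from the yaml files. Below is a minimal sketch that assumes an OpenPCDet-style layout with an `OPTIMIZATION.NUM_EPOCHS` field; the file paths are placeholders, not necessarily the ones in this repo.)

```python
# Sketch: compare the declared training budgets across configs.
# Assumes an OpenPCDet-style yaml with a top-level OPTIMIZATION.NUM_EPOCHS key;
# the paths below are hypothetical examples.
import yaml

CONFIGS = [
    "tools/cfgs/kitti_models/second.yaml",  # hypothetical baseline config
    "tools/cfgs/kitti_models/rdiou.yaml",   # hypothetical RDIoU config
]

for cfg_path in CONFIGS:
    with open(cfg_path) as f:
        cfg = yaml.safe_load(f)
    print(f"{cfg_path}: NUM_EPOCHS = {cfg['OPTIMIZATION']['NUM_EPOCHS']}")
```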

hlsheng1 commented 2 years ago

Thanks for your interest in this work. I am sure that all the reported results adopt the same setting HaHaHa. If you doubt it, you can have a try. By the way, changing the training epochs will not improve the performance of SECOND. You can try it yourself as well.

zen-d commented 2 years ago

Thanks for your quick reply! Given that "I am sure that all the reported results adopt the same setting", I see two possibilities that could explain the inconsistency:

1. The epoch counts in the current config yaml files could be reduced substantially while still maintaining the performance improvement.
2. You trained all the other counterparts with the longer schedule as well, e.g., 200 epochs for SECOND on KITTI.

Of the two, I guess the first one is the correct understanding, am I right?

hlsheng1 commented 2 years ago

Hi, the ablation studies were conducted for car-only detection. Please look at the car-only detection yaml; the default epoch count is 100. I trained all the other counterparts in Table 4 with the same longer schedule (i.e., 100 epochs).

zen-d commented 2 years ago

Then what about Table 1? It covers 3-class training. What are the training epochs for your method and the listed counterparts?

hlsheng1 commented 2 years ago

Table 1 shows the results officially released by the different methods. To the best of my knowledge, each reported result adopts its own settings.