WoodwindHu / DTS

Code for "Density-Insensitive Unsupervised Domain Adaption on 3D Object Detection"
MIT License

Performance Difference #3

Open CBY-9527 opened 1 year ago

CBY-9527 commented 1 year ago

This is great work! I am currently encountering some problems. Running the code you provided (nuScenes->KITTI), the model's performance after source-domain training is normal, but the self-training performance on the KITTI dataset is poor and differs considerably from the results in the paper. Could you please give me some possible suggestions?

performance of source domain (nuScenes) training: BEV AP: 87.3405, 72.4399, 70.4321; 3D AP: 64.7863, 50.5374, 48.3438

performance of target domain (KITTI) self-training: BEV AP: 89.3165, 76.7564, 76.1534; 3D AP: 63.8874, 53.5953, 50.9495

WoodwindHu commented 1 year ago

Did you change the parameters for self-training?

CBY-9527 commented 1 year ago

> Did you change the parameters for self-training?

I used the "secondiou-dts" method without modifying the configuration file. Later, I referred to the parameters in the "pointpillars-dts" and "pvrcnn-dts" configuration files and adjusted the "secondiou-dts" parameters accordingly. However, the performance did not improve as expected and did not reach the results reported in the paper.

WoodwindHu commented 1 year ago

What is your self-training script? How many GPUs did you use in the self-training process? The results reported in the paper were trained with only one GPU.

CBY-9527 commented 1 year ago

> What is your self-training script? How many GPUs did you use in the self-training process? The results reported in the paper were trained with only one GPU.

I used 4 NVIDIA RTX 4090 GPUs for both pre-training and self-training, with a batch size of 16. I will attempt pre-training and self-training on a single GPU, but due to memory limitations I can only use a maximum batch size of 8. Could you confirm whether the paper's pre-training and self-training used a single card with a batch size of 16?
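For reference, a minimal single-GPU run following the OpenPCDet/ST3D-style tooling this repository builds on might look like the sketch below; the cfg paths and checkpoint path are placeholders, not verified against this repo.

```bash
# Sketch of a single-GPU pipeline, assuming OpenPCDet/ST3D-style tools;
# cfg paths and the checkpoint path are placeholders.

# 1. Pre-train on the source domain (nuScenes).
python train.py \
    --cfg_file cfgs/da-nuscenes-kitti_models/secondiou/secondiou.yaml \
    --batch_size 4

# 2. Self-train on the target domain (KITTI), initialized from the
#    source-domain checkpoint produced in step 1.
python train.py \
    --cfg_file cfgs/da-nuscenes-kitti_models/secondiou_dts/secondiou_dts.yaml \
    --batch_size 4 \
    --pretrained_model path/to/source_pretrained_checkpoint.pth
```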

WoodwindHu commented 1 year ago

The default batch size is 4.

WoodwindHu commented 1 year ago

> The default batch size is 4.

I think I found what the problem is; I'll change the default batch size to 4 immediately.
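If the released configs shipped with a different default, the change presumably lands in the optimization block of the model cfg. A minimal sketch assuming the OpenPCDet config layout this repo builds on (field names are the usual OpenPCDet ones, not checked against this repository):

```yaml
# Sketch assuming an OpenPCDet-style config; NUM_EPOCHS is a hypothetical value.
OPTIMIZATION:
    BATCH_SIZE_PER_GPU: 4   # single-GPU setting behind the paper's results
    NUM_EPOCHS: 30          # illustrative only
```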

CBY-9527 commented 1 year ago

> The default batch size is 4.
>
> I think I found what the problem is; I'll change the default batch size to 4 immediately.

I have tried training with a single card and batch size 4, without modifying any parameters, but the self-training performance is still below the results given in the paper (nuScenes->KITTI, secondiou).

performance of source domain (nuScenes) training: BEV AP: 87.1824, 76.9841, 74.8380; 3D AP: 64.3406, 53.6293, 50.4871

performance of target domain (KITTI) self-training: BEV AP: 90.9202, 76.8056, 74.9281; 3D AP: 72.5631, 57.2304, 55.4015

WoodwindHu commented 1 year ago

I think the performance difference could be caused by code cleaning. The code has now been rolled back slightly toward the original version; how is the performance now?

CBY-9527 commented 1 year ago

Thank you for your reply. I have trained and tested DTS (secondiou) using the current code, and the performance of both pre-training and self-training is now normal.

performance of source domain (nuScenes) training: BEV AP: 84.5587, 73.6351, 73.2074; 3D AP: 67.8136, 56.2653, 53.347

performance of target domain (KITTI) self-training: BEV AP: 91.4464, 79.2710, 77.4142; 3D AP: 81.2170, 66.0100, 62.2133

However, I still have some small questions. The self-training parameters differ between configuration files. For example, for nuScenes->KITTI, the DTS configs for pv-rcnn and secondiou use different settings for parameters such as the number of epochs, SCORE_THRESH, and PROG_AUG.
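For illustration, in an ST3D-style config (which DTS builds on), these self-training knobs are typically grouped roughly as below; every value here is hypothetical and the exact keys in this repo may differ:

```yaml
# Hypothetical ST3D-style self-training block; values are illustrative only.
SELF_TRAIN:
    SCORE_THRESH: 0.6            # confidence threshold for keeping pseudo-labels
    PROG_AUG:                    # progressive augmentation schedule
        ENABLED: True
        UPDATE_AUG: [15, 30, 45] # hypothetical epochs at which augmentation strengthens
```

Differences across detectors are plausible since pseudo-label quality, and therefore the best threshold and schedule, depends on the backbone.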

wanghangege commented 1 year ago

> Thank you for your reply. I have trained and tested DTS (secondiou) using the current code, and the performance of both pre-training and self-training is now normal.
>
> performance of source domain (nuScenes) training: BEV AP: 84.5587, 73.6351, 73.2074; 3D AP: 67.8136, 56.2653, 53.347
>
> performance of target domain (KITTI) self-training: BEV AP: 91.4464, 79.2710, 77.4142; 3D AP: 81.2170, 66.0100, 62.2133
>
> However, I still have some small questions. The self-training parameters differ between configuration files. For example, for nuScenes->KITTI, the DTS configs for pv-rcnn and secondiou use different settings for parameters such as the number of epochs, SCORE_THRESH, and PROG_AUG.

[image]

Hello, how can you tell whether your training results are consistent with the table in the paper? I would appreciate it if you could let me know.