THU-DA-6D-Pose-Group / GDR-Net

GDR-Net: Geometry-Guided Direct Regression Network for Monocular 6D Object Pose Estimation. (CVPR 2021)
https://github.com/THU-DA-6D-Pose-Group/GDR-Net
Apache License 2.0

Unable to replicate performance with my own training #102

Closed · shreyesss closed this issue 1 year ago

shreyesss commented 1 year ago

We trained GDR-Net using the specified configuration (without any modifications), but the checkpoint thus obtained gives sub-par performance. Using the checkpoint provided by the authors, we are able to reproduce the performance reported in the paper, but with our own training we cannot replicate the same results.

Results from our trained checkpoint:

| objects | ape | can | cat | driller | duck | eggbox | glue | holepuncher | Avg(8) |
|---|---|---|---|---|---|---|---|---|---|
| ad_2 | 0.43 | 6.30 | 0.67 | 7.17 | 0.26 | 0.09 | 8.99 | 0.00 | 2.99 |
| ad_5 | 13.59 | 52.86 | 11.63 | 38.06 | 10.32 | 11.87 | 44.51 | 22.48 | 25.66 |
| ad_10 | 44.27 | 80.86 | 29.40 | 68.12 | 39.63 | 38.51 | 73.14 | 59.34 | 54.16 |
| rete_2 | 8.89 | 15.33 | 6.82 | 12.03 | 4.11 | 0.09 | 1.44 | 3.06 | 6.47 |
| rete_5 | 53.16 | 69.43 | 33.19 | 56.59 | 22.57 | 8.11 | 42.18 | 50.99 | 42.03 |
| rete_10 | 78.97 | 92.79 | 59.65 | 81.88 | 64.57 | 46.88 | 76.14 | 92.64 | 74.19 |
| re_2 | 8.97 | 16.74 | 8.85 | 15.49 | 4.46 | 0.09 | 2.22 | 4.13 | 7.62 |
| re_5 | 53.42 | 71.25 | 34.46 | 58.81 | 23.53 | 10.67 | 43.06 | 51.40 | 43.33 |
| re_10 | 79.06 | 94.45 | 61.16 | 82.70 | 65.09 | 47.22 | 76.47 | 92.98 | 74.89 |
| te_2 | 73.08 | 81.69 | 40.61 | 59.23 | 74.02 | 14.09 | 47.06 | 75.70 | 58.18 |
| te_5 | 86.67 | 92.38 | 66.13 | 83.61 | 87.05 | 45.52 | 79.69 | 93.55 | 79.32 |
| te_10 | 89.23 | 95.69 | 77.93 | 92.09 | 91.69 | 69.00 | 85.57 | 97.36 | 87.32 |
| proj_2 | 16.75 | 23.36 | 20.98 | 13.59 | 14.96 | 2.31 | 12.76 | 7.27 | 14.00 |
| proj_5 | 65.56 | 82.27 | 61.08 | 63.10 | 71.04 | 43.72 | 58.16 | 81.49 | 65.80 |
| proj_10 | 80.68 | 97.18 | 75.74 | 86.33 | 87.49 | 64.05 | 82.02 | 95.87 | 83.67 |
| re | 13.35 | 4.68 | 19.92 | 9.23 | 15.78 | 16.45 | 10.55 | 7.71 | 12.21 |
| te | 0.06 | 0.02 | 0.07 | 0.03 | 0.04 | 0.10 | 0.06 | 0.02 | 0.05 |

Results from the authors' checkpoint (same machine):

| objects | ape | can | cat | driller | duck | eggbox | glue | holepuncher | Avg(8) |
|---|---|---|---|---|---|---|---|---|---|
| ad_2 | 0.34 | 6.38 | 0.67 | 7.66 | 0.26 | 0.43 | 8.66 | 0.08 | 3.06 |
| ad_5 | 12.99 | 49.79 | 12.05 | 39.13 | 11.20 | 15.88 | 42.95 | 26.36 | 26.29 |
| ad_10 | 44.87 | 79.70 | 30.50 | 67.79 | 39.90 | 49.87 | 73.70 | 62.73 | 56.13 |
| rete_2 | 8.97 | 12.34 | 6.91 | 9.56 | 3.59 | 0.09 | 3.11 | 3.80 | 6.05 |
| rete_5 | 53.08 | 66.61 | 31.68 | 58.65 | 20.30 | 6.15 | 42.84 | 51.24 | 41.32 |
| rete_10 | 78.63 | 92.05 | 58.72 | 81.05 | 65.09 | 39.37 | 76.25 | 90.66 | 72.73 |
| re_2 | 9.32 | 13.01 | 9.01 | 12.60 | 4.02 | 0.09 | 4.66 | 4.71 | 7.18 |
| re_5 | 53.25 | 68.93 | 32.77 | 60.38 | 20.82 | 6.23 | 44.06 | 51.65 | 42.26 |
| re_10 | 78.80 | 94.28 | 59.48 | 82.70 | 65.62 | 39.62 | 76.80 | 91.40 | 73.59 |
| te_2 | 71.88 | 80.20 | 40.10 | 58.90 | 70.95 | 20.50 | 44.40 | 78.02 | 58.12 |
| te_5 | 86.75 | 90.39 | 66.72 | 83.44 | 85.30 | 57.81 | 80.36 | 93.47 | 80.53 |
| te_10 | 88.97 | 94.86 | 79.44 | 91.27 | 89.68 | 70.11 | 85.90 | 96.20 | 87.06 |
| proj_2 | 15.81 | 20.88 | 18.53 | 10.46 | 15.05 | 1.71 | 14.32 | 8.43 | 13.15 |
| proj_5 | 65.73 | 80.53 | 59.65 | 61.04 | 69.99 | 38.17 | 58.60 | 81.49 | 64.40 |
| proj_10 | 80.09 | 97.18 | 73.88 | 84.27 | 86.70 | 64.30 | 81.80 | 95.21 | 82.93 |
| re | 14.97 | 4.79 | 18.28 | 10.03 | 15.25 | 17.32 | 10.10 | 8.58 | 12.41 |
| te | 0.06 | 0.02 | 0.07 | 0.04 | 0.05 | 0.09 | 0.06 | 0.02 | 0.05 |

As is evident, there is a considerable mismatch and a drop in performance (ad_10, ad_5, etc.). Please let us know why this might be happening and how we can rectify it. Thanks.
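
For reference on what the ad_x rows measure, here is a minimal sketch of the standard ADD recall (mean model-point distance under the estimated vs. ground-truth pose, thresholded at x% of the object diameter). The names are illustrative rather than this repo's actual evaluation code, and symmetric objects such as eggbox and glue would use the closest-point ADD-S variant instead:

```python
import numpy as np

def add_error(model_pts, R_gt, t_gt, R_est, t_est):
    """Mean distance between model points transformed by the GT and estimated poses."""
    pts_gt = model_pts @ R_gt.T + t_gt
    pts_est = model_pts @ R_est.T + t_est
    return np.linalg.norm(pts_gt - pts_est, axis=1).mean()

def add_recall(errors, diameter, frac):
    """Fraction of test images whose ADD error falls below frac * object diameter."""
    return float((np.asarray(errors) < frac * diameter).mean())

# ad_2 / ad_5 / ad_10 in the tables above would correspond to frac = 0.02, 0.05, 0.10.
```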

wangg12 commented 1 year ago

It is hard to say. Maybe your environment is not exactly the same as mine; since it has been more than two years, library versions have evolved a lot.

Besides, the difference in the numbers here is not very significant in my view. You can try training for more epochs or downgrading the libraries.
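
One way to compare environments before downgrading anything is to dump the library versions on both machines. A minimal sketch, where the package list is only a guess at what is most relevant for this repo:

```python
import sys

import numpy as np
import torch

# Versions that most often explain training differences.
print("python     :", sys.version.split()[0])
print("numpy      :", np.__version__)
print("torch      :", torch.__version__)
print("cuda       :", torch.version.cuda)
print("cudnn      :", torch.backends.cudnn.version())

try:
    import detectron2
    print("detectron2 :", detectron2.__version__)
except ImportError:
    print("detectron2 : not installed")
```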

BTW, LMO's labels are not very accurate, so it is actually not well suited for benchmarking. You can try the BOP setting and other BOP datasets, because their test sets have more accurate annotations.
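
For evaluating under the BOP setting, results are typically written as a CSV that bop_toolkit can score. A minimal sketch assuming the standard BOP results format (rotation as 9 row-major values, translation in millimeters); the file name and pose values are placeholders:

```python
import csv

import numpy as np

# One entry per estimate: scene_id, im_id, obj_id, score, R (9 row-major values), t (mm), time (s).
rows = [
    {
        "scene_id": 2,
        "im_id": 3,
        "obj_id": 1,
        "score": 1.0,
        "R": np.eye(3),                     # placeholder rotation
        "t": np.array([0.0, 0.0, 400.0]),   # placeholder translation in mm
        "time": -1,                         # -1 if runtime is not reported
    }
]

with open("gdrnet_lmo-test.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["scene_id", "im_id", "obj_id", "score", "R", "t", "time"])
    for r in rows:
        writer.writerow([
            r["scene_id"], r["im_id"], r["obj_id"], r["score"],
            " ".join(f"{v:.6f}" for v in r["R"].flatten()),
            " ".join(f"{v:.6f}" for v in r["t"]),
            r["time"],
        ])
```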