open-mmlab / mmdetection3d

OpenMMLab's next-generation platform for general 3D object detection.
https://mmdetection3d.readthedocs.io/en/latest/
Apache License 2.0

Poor results of PointRCNN on 3-class KITTI Dataset #1565

Open tony10101105 opened 2 years ago

tony10101105 commented 2 years ago

Hi,

I'm trying to reproduce PointRCNN. I downloaded the KITTI dataset and the pretrained weights from the README.md in configs/point_rcnn (point_rcnn_2x8_kitti-3d-3classes_20211208_151344.pth). In point_rcnn_2x8_kitti-3d-3classes.py, I also changed '../_base_/datasets/kitti-3d-car.py' in the _base_ list to '../_base_/datasets/kitti-3d-3class.py'.
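For reference, here is a minimal sketch of that edit in point_rcnn_2x8_kitti-3d-3classes.py; only the dataset entry matters here, and the other _base_ entries are illustrative placeholders that may differ in your checkout of the repo:

```python
# point_rcnn_2x8_kitti-3d-3classes.py -- sketch of the dataset-base swap only.
# The non-dataset entries below are illustrative and may not match your checkout.
_base_ = [
    '../_base_/datasets/kitti-3d-3class.py',  # was '../_base_/datasets/kitti-3d-car.py'
    '../_base_/models/point_rcnn.py',
    '../_base_/schedules/cyclic_40e.py',
    '../_base_/default_runtime.py',
]
```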

Then, I ran: ./tools/dist_test.sh configs/point_rcnn/point_rcnn_2x8_kitti-3d-3classes.py ./checkpoints/point_rcnn_2x8_kitti-3d-3classes_20211208_151344.pth 3 --out results/kitti/point_rcnn.pkl --eval mAP

But the results were quite poor:

[Screenshot of the evaluation results, 2022-06-18 01:05 AM]

Results on both AP40 and AP11 were bad. I think the KITTI dataset itself is fine, since I had previously reproduced SECOND and those results were satisfactory. Does anyone have any ideas? Thanks!

sillybirrrd commented 2 years ago

Hi, I met the same problem while reproducing PointRCNN in Colab. I didn't modify anything and just ran the command below:

! python tools/train.py configs/point_rcnn/point_rcnn_2x8_kitti-3d-3classes.py

Things went wrong from the first epoch:

2022-06-28 15:14:08,951 - mmdet - INFO - Epoch [1][1050/3712]   lr: 1.002e-03, eta: 2 days, 7:46:02, time: 0.669, data_time: 0.005, memory: 4663, bbox_loss: 1.0378, semantic_loss: 0.0967, loss_cls: 0.6035, loss_bbox: 0.4432, loss_corner: 48.2661, loss: 50.4473, grad_norm: 8794.3477
2022-06-28 15:14:29,015 - mmdet - INFO - Epoch [1][1080/3712]   lr: 1.002e-03, eta: 2 days, 7:44:22, time: 0.669, data_time: 0.005, memory: 4663, bbox_loss: 1.0433, semantic_loss: 0.0974, loss_cls: 0.5860, loss_bbox: 0.4866, loss_corner: 0.3205, loss: 2.5337, grad_norm: 24.1995
2022-06-28 15:14:48,951 - mmdet - INFO - Epoch [1][1110/3712]   lr: 1.002e-03, eta: 2 days, 7:42:13, time: 0.665, data_time: 0.006, memory: 4663, bbox_loss: 1.0416, semantic_loss: 0.0996, loss_cls: 0.5907, loss_bbox: 0.4486, loss_corner: 0.1719, loss: 2.3524, grad_norm: 7.4220
2022-06-28 15:15:09,090 - mmdet - INFO - Epoch [1][1140/3712]   lr: 1.002e-03, eta: 2 days, 7:41:02, time: 0.671, data_time: 0.006, memory: 4663, bbox_loss: 1.0228, semantic_loss: 0.0953, loss_cls: 0.6039, loss_bbox: 0.4427, loss_corner: 0.1228, loss: 2.2876, grad_norm: 4.2769
2022-06-28 15:15:29,125 - mmdet - INFO - Epoch [1][1170/3712]   lr: 1.002e-03, eta: 2 days, 7:39:27, time: 0.668, data_time: 0.005, memory: 4663, bbox_loss: 1.0216, semantic_loss: 0.0890, loss_cls: 0.6066, loss_bbox: 0.3920, loss_corner: 0.1147, loss: 2.2239, grad_norm: 4.1757
2022-06-28 15:15:49,093 - mmdet - INFO - Epoch [1][1200/3712]   lr: 1.002e-03, eta: 2 days, 7:37:39, time: 0.666, data_time: 0.006, memory: 4663, bbox_loss: 1.0113, semantic_loss: 0.0917, loss_cls: 0.5958, loss_bbox: 0.3984, loss_corner: 0.1307, loss: 2.2279, grad_norm: 7.4968
2022-06-28 15:16:09,144 - mmdet - INFO - Epoch [1][1230/3712]   lr: 1.002e-03, eta: 2 days, 7:36:16, time: 0.668, data_time: 0.006, memory: 4663, bbox_loss: 0.9994, semantic_loss: 0.0906, loss_cls: 0.5769, loss_bbox: 0.4106, loss_corner: 0.1459, loss: 2.2234, grad_norm: 7.5141
2022-06-28 15:16:29,177 - mmdet - INFO - Epoch [1][1260/3712]   lr: 1.002e-03, eta: 2 days, 7:34:51, time: 0.668, data_time: 0.006, memory: 4663, bbox_loss: 0.9867, semantic_loss: 0.0893, loss_cls: 0.5831, loss_bbox: 0.4195, loss_corner: 0.1365, loss: 2.2151, grad_norm: 4.6261
2022-06-28 15:16:49,112 - mmdet - INFO - Epoch [1][1290/3712]   lr: 1.003e-03, eta: 2 days, 7:33:07, time: 0.664, data_time: 0.005, memory: 4663, bbox_loss: 0.9738, semantic_loss: 0.0985, loss_cls: 0.5870, loss_bbox: 0.3961, loss_corner: 0.1182, loss: 2.1736, grad_norm: 5.0624
2022-06-28 15:17:09,152 - mmdet - INFO - Epoch [1][1320/3712]   lr: 1.003e-03, eta: 2 days, 7:31:51, time: 0.668, data_time: 0.005, memory: 4663, bbox_loss: 0.9502, semantic_loss: 0.0872, loss_cls: 0.5810, loss_bbox: 0.3719, loss_corner: 0.1062, loss: 2.0965, grad_norm: 3.5973
2022-06-28 15:17:29,244 - mmdet - INFO - Epoch [1][1350/3712]   lr: 1.003e-03, eta: 2 days, 7:30:48, time: 0.670, data_time: 0.006, memory: 4663, bbox_loss: 1.0032, semantic_loss: 0.0911, loss_cls: 0.5945, loss_bbox: 0.4862, loss_corner: 0.3848, loss: 2.5597, grad_norm: 37.2284
2022-06-28 15:17:49,210 - mmdet - INFO - Epoch [1][1380/3712]   lr: 1.003e-03, eta: 2 days, 7:29:20, time: 0.666, data_time: 0.005, memory: 4663, bbox_loss: 0.9699, semantic_loss: 0.0915, loss_cls: 0.5889, loss_bbox: 0.5508, loss_corner: 0.2513, loss: 2.4524, grad_norm: 5.0381
2022-06-28 15:18:09,228 - mmdet - INFO - Epoch [1][1410/3712]   lr: 1.003e-03, eta: 2 days, 7:28:06, time: 0.667, data_time: 0.005, memory: 4663, bbox_loss: 0.9153, semantic_loss: 0.0883, loss_cls: 0.5952, loss_bbox: 0.4015, loss_corner: 1.9026, loss: 3.9028, grad_norm: 303.6886
2022-06-28 15:18:29,284 - mmdet - INFO - Epoch [1][1440/3712]   lr: 1.003e-03, eta: 2 days, 7:27:02, time: 0.669, data_time: 0.005, memory: 4663, bbox_loss: 0.9129, semantic_loss: 0.0866, loss_cls: 0.5911, loss_bbox: 0.4803, loss_corner: 0.3135, loss: 2.3845, grad_norm: 4.1048
2022-06-28 15:18:49,610 - mmdet - INFO - Epoch [1][1470/3712]   lr: 1.003e-03, eta: 2 days, 7:26:54, time: 0.678, data_time: 0.005, memory: 4663, bbox_loss: 0.9838, semantic_loss: 0.1008, loss_cls: 0.5897, loss_bbox: 0.5671, loss_corner: 1.8919, loss: 4.1333, grad_norm: 274.4772
2022-06-28 15:19:09,654 - mmdet - INFO - Epoch [1][1500/3712]   lr: 1.004e-03, eta: 2 days, 7:25:49, time: 0.668, data_time: 0.005, memory: 4663, bbox_loss: 0.9194, semantic_loss: 0.0920, loss_cls: 0.5920, loss_bbox: 0.4762, loss_corner: 0.2451, loss: 2.3248, grad_norm: 4.3487
2022-06-28 15:19:29,633 - mmdet - INFO - Epoch [1][1530/3712]   lr: 1.004e-03, eta: 2 days, 7:24:35, time: 0.666, data_time: 0.005, memory: 4663, bbox_loss: 0.9465, semantic_loss: 0.0933, loss_cls: 0.5815, loss_bbox: 0.5254, loss_corner: 3.8213, loss: 5.9681, grad_norm: 607.2803
2022-06-28 15:19:49,613 - mmdet - INFO - Epoch [1][1560/3712]   lr: 1.004e-03, eta: 2 days, 7:23:22, time: 0.666, data_time: 0.005, memory: 4663, bbox_loss: 0.9480, semantic_loss: 0.0864, loss_cls: 0.5980, loss_bbox: 0.7677, loss_corner: 0.5075, loss: 2.9076, grad_norm: 20.0053
2022-06-28 15:20:09,703 - mmdet - INFO - Epoch [1][1590/3712]   lr: 1.004e-03, eta: 2 days, 7:22:32, time: 0.670, data_time: 0.005, memory: 4663, bbox_loss: 0.9628, semantic_loss: 0.0895, loss_cls: 0.5946, loss_bbox: 0.5274, loss_corner: 0.2568, loss: 2.4310, grad_norm: 5.4769
2022-06-28 15:20:29,791 - mmdet - INFO - Epoch [1][1620/3712]   lr: 1.004e-03, eta: 2 days, 7:21:43, time: 0.670, data_time: 0.006, memory: 4663, bbox_loss: 0.8967, semantic_loss: 0.0884, loss_cls: 0.6082, loss_bbox: 0.4854, loss_corner: 0.1772, loss: 2.2559, grad_norm: 5.4091
2022-06-28 15:20:49,756 - mmdet - INFO - Epoch [1][1650/3712]   lr: 1.004e-03, eta: 2 days, 7:20:32, time: 0.666, data_time: 0.005, memory: 4663, bbox_loss: 0.8880, semantic_loss: 0.0917, loss_cls: 0.5982, loss_bbox: 0.3574, loss_corner: 0.1468, loss: 2.0820, grad_norm: 4.4186
2022-06-28 15:21:09,783 - mmdet - INFO - Epoch [1][1680/3712]   lr: 1.004e-03, eta: 2 days, 7:19:34, time: 0.668, data_time: 0.006, memory: 4663, bbox_loss: 0.8899, semantic_loss: 0.0883, loss_cls: 0.6022, loss_bbox: 0.3625, loss_corner: 0.1396, loss: 2.0824, grad_norm: 5.1678
2022-06-28 15:21:29,968 - mmdet - INFO - Epoch [1][1710/3712]   lr: 1.005e-03, eta: 2 days, 7:19:05, time: 0.673, data_time: 0.005, memory: 4663, bbox_loss: 0.8365, semantic_loss: 0.0827, loss_cls: 0.5890, loss_bbox: 0.3544, loss_corner: 0.1348, loss: 1.9974, grad_norm: 5.3038
2022-06-28 15:21:49,943 - mmdet - INFO - Epoch [1][1740/3712]   lr: 1.005e-03, eta: 2 days, 7:18:01, time: 0.666, data_time: 0.005, memory: 4663, bbox_loss: 0.9046, semantic_loss: 0.0959, loss_cls: 0.5877, loss_bbox: 0.3839, loss_corner: 0.1077, loss: 2.0799, grad_norm: 3.7638
2022-06-28 15:22:09,919 - mmdet - INFO - Epoch [1][1770/3712]   lr: 1.005e-03, eta: 2 days, 7:16:58, time: 0.666, data_time: 0.006, memory: 4663, bbox_loss: 0.8711, semantic_loss: 0.0870, loss_cls: 0.5767, loss_bbox: 0.3923, loss_corner: 0.1168, loss: 2.0439, grad_norm: 3.9317
2022-06-28 15:22:29,910 - mmdet - INFO - Epoch [1][1800/3712]   lr: 1.005e-03, eta: 2 days, 7:16:00, time: 0.666, data_time: 0.006, memory: 4663, bbox_loss: 0.9079, semantic_loss: 0.0878, loss_cls: 0.5746, loss_bbox: 0.4181, loss_corner: 0.1683, loss: 2.1567, grad_norm: 8.0817
2022-06-28 15:22:49,938 - mmdet - INFO - Epoch [1][1830/3712]   lr: 1.005e-03, eta: 2 days, 7:15:08, time: 0.668, data_time: 0.006, memory: 4663, bbox_loss: 0.8222, semantic_loss: 0.0821, loss_cls: 0.5829, loss_bbox: 0.3561, loss_corner: 0.1353, loss: 1.9786, grad_norm: 3.6453
2022-06-28 15:23:10,090 - mmdet - INFO - Epoch [1][1860/3712]   lr: 1.005e-03, eta: 2 days, 7:14:37, time: 0.672, data_time: 0.005, memory: 4663, bbox_loss: 0.8381, semantic_loss: 0.0876, loss_cls: 0.5922, loss_bbox: 0.3448, loss_corner: 0.1079, loss: 1.9705, grad_norm: 3.3135
2022-06-28 15:23:30,243 - mmdet - INFO - Epoch [1][1890/3712]   lr: 1.006e-03, eta: 2 days, 7:14:07, time: 0.672, data_time: 0.005, memory: 4663, bbox_loss: 0.9006, semantic_loss: 0.0910, loss_cls: 0.5944, loss_bbox: 0.4726, loss_corner: 0.7132, loss: 2.7719, grad_norm: 56.3988
2022-06-28 15:23:50,287 - mmdet - INFO - Epoch [1][1920/3712]   lr: 1.006e-03, eta: 2 days, 7:13:20, time: 0.668, data_time: 0.006, memory: 4663, bbox_loss: 0.8617, semantic_loss: 0.0785, loss_cls: 0.5841, loss_bbox: 0.3942, loss_corner: 0.1423, loss: 2.0607, grad_norm: 3.2757
2022-06-28 15:24:10,443 - mmdet - INFO - Epoch [1][1950/3712]   lr: 1.006e-03, eta: 2 days, 7:12:51, time: 0.672, data_time: 0.005, memory: 4663, bbox_loss: 0.8751, semantic_loss: 0.0855, loss_cls: 0.5897, loss_bbox: 0.4299, loss_corner: 0.1885, loss: 2.1686, grad_norm: 12.4038
2022-06-28 15:24:30,529 - mmdet - INFO - Epoch [1][1980/3712]   lr: 1.006e-03, eta: 2 days, 7:12:12, time: 0.670, data_time: 0.005, memory: 4663, bbox_loss: 0.8527, semantic_loss: 0.0799, loss_cls: 0.5875, loss_bbox: 0.4615, loss_corner: 0.5622, loss: 2.5439, grad_norm: 65.5213
2022-06-28 15:24:50,538 - mmdet - INFO - Epoch [1][2010/3712]   lr: 1.006e-03, eta: 2 days, 7:11:22, time: 0.667, data_time: 0.006, memory: 4663, bbox_loss: 0.8740, semantic_loss: 0.0825, loss_cls: 0.5889, loss_bbox: 0.4086, loss_corner: 0.1248, loss: 2.0787, grad_norm: 3.4692
2022-06-28 15:25:10,416 - mmdet - INFO - Epoch [1][2040/3712]   lr: 1.007e-03, eta: 2 days, 7:10:14, time: 0.663, data_time: 0.005, memory: 4663, bbox_loss: 0.8211, semantic_loss: 0.0819, loss_cls: 0.5990, loss_bbox: 0.4611, loss_corner: 229.7156, loss: 231.6788, grad_norm: 30899.2806

You can see that in some iterations the total loss and grad_norm diverge dramatically, but this still seemed acceptable since the model kept 'learning'. However, in the second epoch, the model collapsed:

2022-06-28 16:15:03,978 - mmdet - INFO - Epoch [2][2790/3712]   lr: 1.066e-03, eta: 2 days, 6:00:26, time: 0.664, data_time: 0.005, memory: 4663, bbox_loss: 0.5772, semantic_loss: 0.0599, loss_cls: 0.5440, loss_bbox: 0.2615, loss_corner: 0.0466, loss: 1.4892, grad_norm: 1.6312
2022-06-28 16:15:23,742 - mmdet - INFO - Epoch [2][2820/3712]   lr: 1.067e-03, eta: 2 days, 5:59:52, time: 0.659, data_time: 0.005, memory: 4663, bbox_loss: 0.5919, semantic_loss: 0.0578, loss_cls: 0.5427, loss_bbox: 0.2527, loss_corner: 0.0451, loss: 1.4902, grad_norm: 1.7961
2022-06-28 16:15:43,625 - mmdet - INFO - Epoch [2][2850/3712]   lr: 1.068e-03, eta: 2 days, 5:59:23, time: 0.663, data_time: 0.005, memory: 4663, bbox_loss: 0.6300, semantic_loss: 0.0649, loss_cls: 0.5430, loss_bbox: 0.3059, loss_corner: 19426.1580, loss: 19427.7019, grad_norm: 9655975.2565
2022-06-28 16:16:03,481 - mmdet - INFO - Epoch [2][2880/3712]   lr: 1.068e-03, eta: 2 days, 5:58:53, time: 0.662, data_time: 0.005, memory: 4663, bbox_loss: 0.5737, semantic_loss: 0.0629, loss_cls: 0.5389, loss_bbox: 0.3841, loss_corner: inf, loss: inf, grad_norm: 2.7262
/usr/local/lib/python3.7/dist-packages/mmcv/runner/hooks/optimizer.py:50: FutureWarning: Non-finite norm encountered in torch.nn.utils.clip_grad_norm_; continuing anyway. Note that the default behavior will change in a future release to error out if a non-finite total norm is encountered. At that point, setting error_if_nonfinite=false will be required to retain the old behavior.
  return clip_grad.clip_grad_norm_(params, **self.grad_clip)
2022-06-28 16:16:23,290 - mmdet - INFO - Epoch [2][2910/3712]   lr: 1.069e-03, eta: 2 days, 5:58:21, time: 0.660, data_time: 0.005, memory: 4663, bbox_loss: 0.5935, semantic_loss: 0.0629, loss_cls: nan, loss_bbox: nan, loss_corner: nan, loss: nan, grad_norm: nan
2022-06-28 16:16:43,035 - mmdet - INFO - Epoch [2][2940/3712]   lr: 1.069e-03, eta: 2 days, 5:57:47, time: 0.658, data_time: 0.006, memory: 4663, bbox_loss: 0.6095, semantic_loss: 0.0667, loss_cls: nan, loss_bbox: nan, loss_corner: nan, loss: nan, grad_norm: nan
2022-06-28 16:17:02,797 - mmdet - INFO - Epoch [2][2970/3712]   lr: 1.070e-03, eta: 2 days, 5:57:13, time: 0.659, data_time: 0.005, memory: 4663, bbox_loss: 0.5640, semantic_loss: 0.0624, loss_cls: nan, loss_bbox: nan, loss_corner: nan, loss: nan, grad_norm: nan
2022-06-28 16:17:22,623 - mmdet - INFO - Epoch [2][3000/3712]   lr: 1.071e-03, eta: 2 days, 5:56:42, time: 0.661, data_time: 0.005, memory: 4663, bbox_loss: 0.5915, semantic_loss: 0.0607, loss_cls: nan, loss_bbox: nan, loss_corner: nan, loss: nan, grad_norm: nan
2022-06-28 16:17:42,526 - mmdet - INFO - Epoch [2][3030/3712]   lr: 1.071e-03, eta: 2 days, 5:56:15, time: 0.663, data_time: 0.006, memory: 4663, bbox_loss: 0.5859, semantic_loss: 0.0609, loss_cls: nan, loss_bbox: nan, loss_corner: nan, loss: nan, grad_norm: nan
2022-06-28 16:18:02,451 - mmdet - INFO - Epoch [2][3060/3712]   lr: 1.072e-03, eta: 2 days, 5:55:48, time: 0.664, data_time: 0.005, memory: 4663, bbox_loss: 0.5654, semantic_loss: 0.0612, loss_cls: nan, loss_bbox: nan, loss_corner: nan, loss: nan, grad_norm: nan
2022-06-28 16:18:22,237 - mmdet - INFO - Epoch [2][3090/3712]   lr: 1.073e-03, eta: 2 days, 5:55:16, time: 0.660, data_time: 0.005, memory: 4663, bbox_loss: 0.5443, semantic_loss: 0.0628, loss_cls: nan, loss_bbox: nan, loss_corner: nan, loss: nan, grad_norm: nan
2022-06-28 16:18:41,980 - mmdet - INFO - Epoch [2][3120/3712]   lr: 1.073e-03, eta: 2 days, 5:54:42, time: 0.658, data_time: 0.005, memory: 4663, bbox_loss: 0.5704, semantic_loss: 0.0585, loss_cls: nan, loss_bbox: nan, loss_corner: nan, loss: nan, grad_norm: nan
2022-06-28 16:19:01,851 - mmdet - INFO - Epoch [2][3150/3712]   lr: 1.074e-03, eta: 2 days, 5:54:14, time: 0.662, data_time: 0.005, memory: 4663, bbox_loss: 0.5856, semantic_loss: 0.0604, loss_cls: nan, loss_bbox: nan, loss_corner: nan, loss: nan, grad_norm: nan

The losses of the second stage become nan right after the FutureWarning from mmcv's optimizer hook quoted above (non-finite norm encountered in torch.nn.utils.clip_grad_norm_; continuing anyway). Is this a bug, or is there something that needs to be modified that I missed?
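For what it's worth, the clipping that produces this warning is configured through optimizer_config (mmcv's OptimizerHook passes grad_clip straight to torch.nn.utils.clip_grad_norm_, which is the call shown in the traceback above), so one thing I can try is tightening the threshold. A minimal sketch, where the max_norm value is only an illustrative guess rather than a recommended setting:

```python
# Sketch only: tighten gradient clipping in the training config to damp the
# occasional huge grad_norm spikes shown in the log above. The max_norm value
# is an illustrative guess, not an official recommendation.
optimizer_config = dict(grad_clip=dict(max_norm=10, norm_type=2))
```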

jiangziben commented 2 years ago

Hi, I met the same problem while reproducing SECOND; the results were quite poor. Have you solved it?

----------- AP11 Results ------------

Car AP11@0.70, 0.70, 0.70:
bbox AP11:9.9299, 10.3669, 10.6121
bev  AP11:0.0433, 0.1324, 0.1343
3d   AP11:0.0366, 0.0930, 0.0924
aos  AP11:3.23, 4.70, 4.81
Car AP11@0.70, 0.50, 0.50:
bbox AP11:9.9299, 10.3669, 10.6121
bev  AP11:0.1352, 0.3300, 0.3601
3d   AP11:0.1266, 0.3040, 0.2868
aos  AP11:3.23, 4.70, 4.81

----------- AP40 Results ------------

Car AP40@0.70, 0.70, 0.70:
bbox AP40:1.3368, 1.9814, 2.3379
bev  AP40:0.0119, 0.0364, 0.0369
3d   AP40:0.0101, 0.0256, 0.0254
aos  AP40:0.51, 0.75, 0.90
Car AP40@0.70, 0.50, 0.50:
bbox AP40:1.3368, 1.9814, 2.3379
bev  AP40:0.0743, 0.2669, 0.2871
3d   AP40:0.0680, 0.2434, 0.1577
aos  AP40:0.51, 0.75, 0.90

VVsssssk commented 1 year ago

Hi, I have a fix for this in https://github.com/open-mmlab/mmdetection3d/pull/1874, and I will update the PointRCNN checkpoints in the model zoo later.

vahdat-ab commented 1 year ago

I think the nan issue with PointRCNN is still there. I am using rc5, and after a few iterations the nan appears. [screenshot]

mihudaner commented 1 year ago

The PointRCNN demo also produces erroneous results. [screenshot]

VVsssssk commented 1 year ago

I think the nan issue with PointRCNN is still there. I am using rc5, and after a few iterations the nan appears. [screenshot]

Could you provide your env? I will recheck PointRCNN.

VVsssssk commented 1 year ago

The PointRCNN demo also produces erroneous results. [screenshot]

I will update the model zoo checkpoint later.

Biblbrox commented 3 months ago

Hello, VVsssssk! Are there any updates on this issue? I have the same problem, and it seems that the weights haven't been updated yet.

This issue can still be reproduced on the main branch. Is this the newest URL for the weights?

https://download.openmmlab.com/mmdetection3d/v1.0.0_models/point_rcnn/point_rcnn_2x8_kitti-3d-3classes_20211208_151344.pth

P.S. I have checked the PointRCNN model in OpenPCDet and it works just fine. For some reason, PointRCNN in mmdetection3d is broken, and I'm trying to investigate why. It doesn't look like a problem with coordinates, but rather an issue with the weights.
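As a quick sanity check on the weights themselves, I'm comparing the released checkpoint's state_dict keys against a freshly built model; a rough sketch, assuming a recent mmdetection3d install and the config/checkpoint paths used earlier in this thread:

```python
# Rough sketch: check whether the released checkpoint's keys match the model
# built from the 3-class config (mismatched keys would point to a weights issue).
import torch
from mmdet3d.apis import init_model

config = 'configs/point_rcnn/point_rcnn_2x8_kitti-3d-3classes.py'
ckpt_path = 'checkpoints/point_rcnn_2x8_kitti-3d-3classes_20211208_151344.pth'

model = init_model(config, checkpoint=None, device='cpu')
ckpt = torch.load(ckpt_path, map_location='cpu')

model_keys = set(model.state_dict().keys())
ckpt_keys = set(ckpt['state_dict'].keys())

print('keys missing from checkpoint:', sorted(model_keys - ckpt_keys)[:10])
print('unexpected keys in checkpoint:', sorted(ckpt_keys - model_keys)[:10])
```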