open-mmlab / mmdetection3d

OpenMMLab's next-generation platform for general 3D object detection.
https://mmdetection3d.readthedocs.io/en/latest/
Apache License 2.0

[Bug] VOD (View-of-Delft) with 3 classes: when using (3+1)D data (4D radar), the results are extremely poor or even 0. #2934

Closed Random-junhao closed 3 months ago

Random-junhao commented 3 months ago

Prerequisite

Task

I have modified the scripts/configs, or I'm working on my own tasks/models/datasets.

Branch

main branch https://github.com/open-mmlab/mmdetection3d

Environment

sys.platform: linux
Python: 3.8.19 (default, Mar 20 2024, 19:58:24) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce RTX 3090
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.3, V11.3.58
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.12.1+cu113
PyTorch compiling details: PyTorch built with:
TorchVision: 0.13.1+cu113
OpenCV: 4.9.0
MMEngine: 0.10.3
MMDetection: 3.3.0
MMDetection3D: 1.4.0+
spconv2.0: False

Reproduces the problem - code sample

bbox_head=dict(
    type='Anchor3DHead',  # bounding box head type
    num_classes=3,  # number of target classes
    in_channels=384,
    feat_channels=384,  # number of feature channels
    use_direction_classifier=True,  # use the direction classifier
    assign_per_class=True,  # assign anchors independently per class
    anchor_generator=dict(
        type='AlignedAnchor3DRangeGenerator',  # anchor generator type
        ranges=[
            [0, -39.68, -0.6, 69.12, 39.68, -0.6],
            [0, -39.68, -0.6, 69.12, 39.68, -0.6],
            [0, -39.68, -1.78, 69.12, 39.68, -1.78]],
        sizes=[
            [0.8, 0.6, 1.73],
            [3.9, 1.6, 1.56],
            [1.76, 0.6, 1.73]
            # [3.9, 1.6, 1.56],
        ],
        rotations=[0, 1.57],
        reshape_out=False),
    diff_rad_by_sin=True,  # encode the direction difference via sine
    bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'),  # bounding box coder type
    loss_cls=dict(
        type='mmdet.FocalLoss',  # classification loss type
        use_sigmoid=True,
        gamma=2.0,  # gamma value
        alpha=0.25,  # alpha value
        loss_weight=1.0),  # loss weight
    loss_bbox=dict(
        type='mmdet.SmoothL1Loss',
        beta=0.1111111111111111,
        loss_weight=3.0
        # loss_weight=2.0  # kitti
        ),
    loss_dir=dict(  # direction loss type
        type='mmdet.CrossEntropyLoss', use_sigmoid=False,
        loss_weight=0.2)),  # loss weight
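
For context, the KITTI-style pipeline assumes 4 values per LiDAR point, while the VOD radar files store 7 values per point (x, y, z, RCS, v_r, v_r_compensated, time), so I suspect the data pipeline and the pillar encoder also need to be adapted. The snippet below is only a sketch of the kind of changes I mean; the exact point_cloud_range and voxel_size are my assumptions for the VOD radar split, not values taken from any official config.

# Sketch only: point_cloud_range and voxel_size below are assumptions,
# not verified values for the VOD radar split.
point_cloud_range = [0, -25.6, -3, 51.2, 25.6, 2]

train_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=7,   # VOD radar points: x, y, z, RCS, v_r, v_r_compensated, time
        use_dim=7),   # keep all radar features (my assumption)
    # ... the remaining transforms stay as in the KITTI-style config
]

model = dict(
    voxel_encoder=dict(
        type='PillarFeatureNet',
        in_channels=7,  # must match use_dim above
        feat_channels=[64],
        with_distance=False,
        voxel_size=[0.16, 0.16, 5],
        point_cloud_range=point_cloud_range))

If load_dim/use_dim stay at the KITTI value of 4 while the .bin files actually contain 7 floats per point, the points are scrambled when they are reshaped, which would be one explanation for near-zero BEV/3D AP.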

Reproduces the problem - command or script

sudo python tools/train.py configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_vod-3d-3class.py

Reproduces the problem - error message

I use the mmdetection3d framework for 3D object detection on 4D radar point clouds. I train with the configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_vod-3d-3class.py config; the three classes are Car, Pedestrian and Cyclist. Because the VOD dataset is modeled after the KITTI dataset, I made my own modifications here, but the reproduced results are far from satisfactory. Does anyone know the reason for this? Here are the results of my run:

----------- AP11 Results ------------

Pedestrian AP11@0.50, 0.50, 0.50:
bbox AP11:1.0101, 1.1140, 1.1409
bev AP11:0.0000, 0.0000, 0.0000
3d AP11:0.0000, 0.0000, 0.0000
aos AP11:1.01, 1.01, 1.01
Pedestrian AP11@0.50, 0.25, 0.25:
bbox AP11:1.0101, 1.1140, 1.1409
bev AP11:1.2716, 0.8237, 0.8209
3d AP11:0.3696, 0.5321, 0.5262
aos AP11:1.01, 1.01, 1.01
Cyclist AP11@0.50, 0.50, 0.50:
bbox AP11:0.0451, 0.0676, 0.0676
bev AP11:0.0000, 0.0000, 0.0000
3d AP11:0.0000, 0.0000, 0.0000
aos AP11:0.01, 0.02, 0.02
Cyclist AP11@0.50, 0.25, 0.25:
bbox AP11:0.0451, 0.0676, 0.0676
bev AP11:0.0000, 0.0000, 0.0000
3d AP11:0.0000, 0.0000, 0.0000
aos AP11:0.01, 0.02, 0.02
Car AP11@0.70, 0.70, 0.70:
bbox AP11:0.0000, 0.0000, 0.0000
bev AP11:0.0000, 0.0000, 0.0000
3d AP11:0.0000, 0.0000, 0.0000
aos AP11:0.00, 0.00, 0.00
Car AP11@0.70, 0.50, 0.50:
bbox AP11:0.0000, 0.0000, 0.0000
bev AP11:0.0000, 0.0000, 0.0000
3d AP11:0.0000, 0.0000, 0.0000
aos AP11:0.00, 0.00, 0.00

Overall AP11@easy, moderate, hard:
bbox AP11:0.3517, 0.3939, 0.4028
bev AP11:0.0000, 0.0000, 0.0000
3d AP11:0.0000, 0.0000, 0.0000
aos AP11:0.34, 0.34, 0.34

----------- AP40 Results ------------

Pedestrian AP40@0.50, 0.50, 0.50:
bbox AP40:0.5647, 0.8085, 0.7891
bev AP40:0.0000, 0.0000, 0.0000
3d AP40:0.0000, 0.0000, 0.0000
aos AP40:0.27, 0.44, 0.43
Pedestrian AP40@0.50, 0.25, 0.25:
bbox AP40:0.5647, 0.8085, 0.7891
bev AP40:0.6182, 0.6485, 0.6399
3d AP40:0.2920, 0.2784, 0.2768
aos AP40:0.27, 0.44, 0.43
Cyclist AP40@0.50, 0.50, 0.50:
bbox AP40:0.0124, 0.0186, 0.0186
bev AP40:0.0000, 0.0000, 0.0000
3d AP40:0.0000, 0.0000, 0.0000
aos AP40:0.00, 0.01, 0.01
Cyclist AP40@0.50, 0.25, 0.25:
bbox AP40:0.0124, 0.0186, 0.0186
bev AP40:0.0000, 0.0000, 0.0000
3d AP40:0.0000, 0.0000, 0.0000
aos AP40:0.00, 0.01, 0.01
Car AP40@0.70, 0.70, 0.70:
bbox AP40:0.0000, 0.0000, 0.0000
bev AP40:0.0000, 0.0000, 0.0000
3d AP40:0.0000, 0.0000, 0.0000
aos AP40:0.00, 0.00, 0.00
Car AP40@0.70, 0.50, 0.50:
bbox AP40:0.0000, 0.0000, 0.0000
bev AP40:0.0000, 0.0000, 0.0000
3d AP40:0.0000, 0.0000, 0.0000
aos AP40:0.00, 0.00, 0.00

Overall AP40@easy, moderate, hard:
bbox AP40:0.1924, 0.2757, 0.2692
bev AP40:0.0000, 0.0000, 0.0000
3d AP40:0.0000, 0.0000, 0.0000
aos AP40:0.09, 0.15, 0.14

Additional information

I hope someone can help me figure out the reason for the poor training results; I am using the VOD dataset.
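
To rule out a data-conversion problem, I also plan to check the raw radar files directly. This is a minimal sketch, assuming the VOD radar .bin files store float32 values and 7 features per point; the file path is just a placeholder for one frame under my data root.

import numpy as np

# Hypothetical path to one converted VOD radar frame; adjust to your data root.
bin_path = 'data/vod/radar/training/velodyne/00000.bin'

raw = np.fromfile(bin_path, dtype=np.float32)

# If the file really holds N features per point, its length must divide evenly by N.
for dims in (4, 7):
    ok = raw.size % dims == 0
    print(f'{dims} features per point: {"possible" if ok else "impossible"} '
          f'({raw.size} floats, {raw.size / dims:.1f} points)')

# Inspect coordinate ranges with the assumed 7-feature layout.
if raw.size % 7 == 0:
    points = raw.reshape(-1, 7)
    print('x/y/z min:', points[:, :3].min(axis=0),
          'max:', points[:, :3].max(axis=0))

If the x/y/z range printed here does not fall inside the point_cloud_range and anchor ranges in the config, almost every ground-truth box ends up without points or without matching anchors, which would match the all-zero BEV/3D AP I am seeing.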