open-mmlab / mmdetection3d

OpenMMLab's next-generation platform for general 3D object detection.
https://mmdetection3d.readthedocs.io/en/latest/
Apache License 2.0

[Bug] Meet different errors when Training MVXNet #1970

[Open] hellohaozheng opened this issue 1 year ago

hellohaozheng commented 1 year ago

Prerequisite

Task

I'm using the official example scripts/configs for the officially supported tasks/models/datasets.

Branch

master branch https://github.com/open-mmlab/mmdetection3d

Environment

sys.platform: linux
Python: 3.8.13 (default, Oct 21 2022, 23:50:54) [GCC 11.2.0]
CUDA available: True
GPU 0,1,2,3: NVIDIA GeForce RTX 3090
CUDA_HOME: /usr
NVCC: Cuda compilation tools, release 11.5, V11.5.119
GCC: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
PyTorch: 1.10.1
PyTorch compiling details: PyTorch built with:
TorchVision: 0.11.2
OpenCV: 4.6.0
MMCV: 1.6.2
MMCV Compiler: GCC 9.3
MMCV CUDA Compiler: 11.3
MMDetection: 2.25.3
MMSegmentation: 0.29.0
MMDetection3D: 1.0.0rc4+9556958

Reproduces the problem - code sample

Here is the MVX-Net config I used.

_base_ = ['../_base_/schedules/cosine.py', '../_base_/default_runtime.py']

# model settings
voxel_size = [0.05, 0.05, 0.1]
point_cloud_range = [0, -40, -3, 70.4, 40, 1]
model = dict(
    type='DynamicMVXFasterRCNN',
    img_backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=False),
        norm_eval=True,
        style='caffe'),
    img_neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        num_outs=5),
    pts_voxel_layer=dict(
        max_num_points=-1,
        point_cloud_range=point_cloud_range,
        voxel_size=voxel_size,
        max_voxels=(-1, -1),
    ),
    pts_voxel_encoder=dict(
        type='DynamicVFE',
        in_channels=4,
        feat_channels=[64, 64],
        with_distance=False,
        voxel_size=voxel_size,
        with_cluster_center=True,
        with_voxel_center=True,
        point_cloud_range=point_cloud_range,
        fusion_layer=dict(
            type='PointFusion',
            img_channels=256,
            pts_channels=64,
            mid_channels=128,
            out_channels=128,
            img_levels=[0, 1, 2, 3, 4],
            align_corners=False,
            activate_out=True,
            fuse_out=False)),
    pts_middle_encoder=dict(
        type='SparseEncoder',
        in_channels=128,
        sparse_shape=[41, 1600, 1408],
        order=('conv', 'norm', 'act')),
    pts_backbone=dict(
        type='SECOND',
        in_channels=256,
        layer_nums=[5, 5],
        layer_strides=[1, 2],
        out_channels=[128, 256]),
    pts_neck=dict(
        type='SECONDFPN',
        in_channels=[128, 256],
        upsample_strides=[1, 2],
        out_channels=[256, 256]),
    pts_bbox_head=dict(
        type='Anchor3DHead',
        num_classes=3,
        in_channels=512,
        feat_channels=512,
        use_direction_classifier=True,
        anchor_generator=dict(
            type='Anchor3DRangeGenerator',
            ranges=[
                [0, -40.0, -0.6, 70.4, 40.0, -0.6],
                [0, -40.0, -0.6, 70.4, 40.0, -0.6],
                [0, -40.0, -1.78, 70.4, 40.0, -1.78],
            ],
            sizes=[[0.8, 0.6, 1.73], [1.76, 0.6, 1.73], [3.9, 1.6, 1.56]],
            rotations=[0, 1.57],
            reshape_out=False),
        assigner_per_size=True,
        diff_rad_by_sin=True,
        assign_per_class=True,
        bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'),
        loss_cls=dict(
            type='FocalLoss',
            use_sigmoid=True,
            gamma=2.0,
            alpha=0.25,
            loss_weight=1.0),
        loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0),
        loss_dir=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)),
    # model training and testing settings
    train_cfg=dict(
        pts=dict(
            assigner=[
                dict(  # for Pedestrian
                    type='MaxIoUAssigner',
                    iou_calculator=dict(type='BboxOverlapsNearest3D'),
                    pos_iou_thr=0.35,
                    neg_iou_thr=0.2,
                    min_pos_iou=0.2,
                    ignore_iof_thr=-1),
                dict(  # for Cyclist
                    type='MaxIoUAssigner',
                    iou_calculator=dict(type='BboxOverlapsNearest3D'),
                    pos_iou_thr=0.35,
                    neg_iou_thr=0.2,
                    min_pos_iou=0.2,
                    ignore_iof_thr=-1),
                dict(  # for Car
                    type='MaxIoUAssigner',
                    iou_calculator=dict(type='BboxOverlapsNearest3D'),
                    pos_iou_thr=0.6,
                    neg_iou_thr=0.45,
                    min_pos_iou=0.45,
                    ignore_iof_thr=-1),
            ],
            allowed_border=0,
            pos_weight=-1,
            debug=False)),
    test_cfg=dict(
        pts=dict(
            use_rotate_nms=True,
            nms_across_levels=False,
            nms_thr=0.01,
            score_thr=0.1,
            min_bbox_size=0,
            nms_pre=100,
            max_num=50)))

# dataset settings
dataset_type = 'KittiDataset'
data_root = '/data/hhz/kitti/'
class_names = ["Pedestrian", "Cyclist", "Car"]
img_norm_cfg = dict(
    mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
input_modality = dict(use_lidar=True, use_camera=True)
train_pipeline = [
    dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4),
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True),
    dict(
        type='Resize',
        img_scale=[(640, 192), (2560, 768)],
        multiscale_mode='range',
        keep_ratio=True),
    dict(
        type='GlobalRotScaleTrans',
        rot_range=[-0.78539816, 0.78539816],
        scale_ratio_range=[0.95, 1.05],
        translation_std=[0.2, 0.2, 0.2]),
    dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5),
    dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='PointShuffle'),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle3D', class_names=class_names),
    dict(
        type='Collect3D',
        keys=['points', 'img', 'gt_bboxes_3d', 'gt_labels_3d']),
]
test_pipeline = [
    dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4),
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug3D',
        img_scale=(1280, 384),
        pts_scale_ratio=1,
        flip=False,
        transforms=[
            dict(type='Resize', multiscale_mode='value', keep_ratio=True),
            dict(
                type='GlobalRotScaleTrans',
                rot_range=[0, 0],
                scale_ratio_range=[1., 1.],
                translation_std=[0, 0, 0]),
            dict(type='RandomFlip3D'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(
                type='PointsRangeFilter', point_cloud_range=point_cloud_range),
            dict(
                type='DefaultFormatBundle3D',
                class_names=class_names,
                with_label=False),
            dict(type='Collect3D', keys=['points', 'img'])
        ])
]
# construct a pipeline for data and gt loading in show function
# please keep its loading function consistent with test_pipeline (e.g. client)
eval_pipeline = [
    dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4),
    dict(type='LoadImageFromFile'),
    dict(
        type='DefaultFormatBundle3D',
        class_names=class_names,
        with_label=False),
    dict(type='Collect3D', keys=['points', 'img'])
]

data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type='RepeatDataset',
        times=2,
        dataset=dict(
            type=dataset_type,
            data_root=data_root,
            ann_file=data_root + 'kitti_infos_train.pkl',
            split='training',
            pts_prefix='velodyne_reduced',
            pipeline=train_pipeline,
            modality=input_modality,
            classes=class_names,
            test_mode=False,
            box_type_3d='LiDAR')),
    val=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file=data_root + 'kitti_infos_val.pkl',
        split='training',
        pts_prefix='velodyne_reduced',
        pipeline=test_pipeline,
        modality=input_modality,
        classes=class_names,
        test_mode=True,
        box_type_3d='LiDAR'),
    test=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file=data_root + 'kitti_infos_val.pkl',
        split='training',
        pts_prefix='velodyne_reduced',
        pipeline=test_pipeline,
        modality=input_modality,
        classes=class_names,
        test_mode=True,
        box_type_3d='LiDAR'))

# Training settings
optimizer = dict(weight_decay=0.01)
# max_norm=10 is better for SECOND
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))

evaluation = dict(interval=1, pipeline=eval_pipeline)

# You may need to download the model first if the network is unstable
# load_from = 'https://download.openmmlab.com/mmdetection3d/pretrain_models/mvx_faster_rcnn_detectron2-caffe_20e_coco-pretrain_gt-sample_kitti-3-class_moderate-79.3_20200207-a4a6a3c7.pth'  # noqa
load_from = 'pre_trained/mvx_faster_rcnn_detectron2-caffe_20e_coco-pretrain_gt-sample_kitti-3-class_moderate-79.3_20200207-a4a6a3c7.pth'

And I didn't change tools/train.py.

Reproduces the problem - command or script

When I trained MVX-Net on a single GPU, I used the following command.

python3 tools/train.py configs/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py

When I trained MVX-Net on multiple GPUs, I used the following command.

./tools/dist_train.sh configs/mvxnet/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class.py 4 --work-dir my_checkpoints

Reproduces the problem - error message

When I trained it on a single GPU, I got the following error:

2022-10-30 10:36:52,288 - mmdet - INFO - workflow: [('train', 1)], max: 40 epochs
2022-10-30 10:36:52,288 - mmdet - INFO - Checkpoints will be saved to /home/hhz/code/detection/mmdetection3d/work_dirs/dv_mvx-fpn_second_secfpn_adamw_2x8_80e_kitti-3d-3class by HardDiskBackend.
Traceback (most recent call last):
  File "tools/train.py", line 262, in <module>
    main()
  File "tools/train.py", line 251, in main
    train_model(
  File "/home/hhz/code/detection/mmdetection3d/mmdet3d/apis/train.py", line 344, in train_model
    train_detector(
  File "/home/hhz/code/detection/mmdetection3d/mmdet3d/apis/train.py", line 319, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/home/hhz/anaconda3/envs/openmmlab/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 136, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/hhz/anaconda3/envs/openmmlab/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 49, in train
    for i, data_batch in enumerate(self.data_loader):
  File "/home/hhz/anaconda3/envs/openmmlab/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 359, in __iter__
    return self._get_iterator()
  File "/home/hhz/anaconda3/envs/openmmlab/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 305, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "/home/hhz/anaconda3/envs/openmmlab/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 944, in __init__
    self._reset(loader, first_iter=True)
  File "/home/hhz/anaconda3/envs/openmmlab/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 975, in _reset
    self._try_put_index()
  File "/home/hhz/anaconda3/envs/openmmlab/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1209, in _try_put_index
    index = self._next_index()
  File "/home/hhz/anaconda3/envs/openmmlab/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 512, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
  File "/home/hhz/anaconda3/envs/openmmlab/lib/python3.8/site-packages/torch/utils/data/sampler.py", line 229, in __iter__
    for idx in self.sampler:
  File "/home/hhz/anaconda3/envs/openmmlab/lib/python3.8/site-packages/mmdet/datasets/samplers/group_sampler.py", line 36, in __iter__
    indices = np.concatenate(indices)
  File "<__array_function__ internals>", line 180, in concatenate
ValueError: need at least one array to concatenate
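
For reference, this ValueError from GroupSampler usually means the dataset resolved to zero samples (for example, a wrong ann_file path or an empty info file), so the sampler has no indices to concatenate. A minimal sanity check I would run, assuming the standard KITTI info layout and the paths from my config:

```python
# Verify the training info file exists and is non-empty; if it loads zero
# infos, the dataset (and hence GroupSampler's index list) will be empty.
import pickle

ann_file = '/data/hhz/kitti/kitti_infos_train.pkl'  # path from the config above
with open(ann_file, 'rb') as f:
    infos = pickle.load(f)
# The standard KITTI train split should yield 3712 infos.
print(f'loaded {len(infos)} training infos from {ann_file}')
```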

When I trained it on multiple GPUs, I got strange results during validation: every AP is zero.

[>>>>>>>>>>>>>>>>>>>>>>>>> ] 3766/3769, 5699.5 task/s, elapsed: 1s, ETA:     0s
[>>>>>>>>>>>>>>>>>>>>>>>>> ] 3767/3769, 5699.5 task/s, elapsed: 1s, ETA:     0s
[>>>>>>>>>>>>>>>>>>>>>>>>> ] 3768/3769, 5699.6 task/s, elapsed: 1s, ETA:     0s
[>>>>>>>>>>>>>>>>>>>>>>>>>>] 3769/3769, 5699.6 task/s, elapsed: 1s, ETA:     0s
Result is saved to /tmp/tmpj3mtxq1k/resultspts_bbox.pkl.
OMP: Info #276: omp_set_nested routine deprecated, please use omp_set_max_active_levels instead.
2022-10-29 23:07:30,019 - mmdet - INFO - Results of pts_bbox:

----------- AP11 Results ------------

Pedestrian AP11@0.50, 0.50, 0.50:
bbox AP11:0.0000, 0.0000, 0.0000
bev  AP11:0.0000, 0.0000, 0.0000
3d   AP11:0.0000, 0.0000, 0.0000
Pedestrian AP11@0.50, 0.25, 0.25:
bbox AP11:0.0000, 0.0000, 0.0000
bev  AP11:0.0000, 0.0000, 0.0000
3d   AP11:0.0000, 0.0000, 0.0000
Cyclist AP11@0.50, 0.50, 0.50:
bbox AP11:0.0000, 0.0000, 0.0000
bev  AP11:0.0000, 0.0000, 0.0000
3d   AP11:0.0000, 0.0000, 0.0000
Cyclist AP11@0.50, 0.25, 0.25:
bbox AP11:0.0000, 0.0000, 0.0000
bev  AP11:0.0000, 0.0000, 0.0000
3d   AP11:0.0000, 0.0000, 0.0000
Car AP11@0.70, 0.70, 0.70:
bbox AP11:0.0000, 0.0000, 0.0000
bev  AP11:0.0000, 0.0000, 0.0000
3d   AP11:0.0000, 0.0000, 0.0000
Car AP11@0.70, 0.50, 0.50:
bbox AP11:0.0000, 0.0000, 0.0000
bev  AP11:0.0000, 0.0000, 0.0000
3d   AP11:0.0000, 0.0000, 0.0000

Overall AP11@easy, moderate, hard:
bbox AP11:0.0000, 0.0000, 0.0000
bev  AP11:0.0000, 0.0000, 0.0000
3d   AP11:0.0000, 0.0000, 0.0000

----------- AP40 Results ------------

Pedestrian AP40@0.50, 0.50, 0.50:
bbox AP40:0.0000, 0.0000, 0.0000
bev  AP40:0.0000, 0.0000, 0.0000
3d   AP40:0.0000, 0.0000, 0.0000
Pedestrian AP40@0.50, 0.25, 0.25:
bbox AP40:0.0000, 0.0000, 0.0000
bev  AP40:0.0000, 0.0000, 0.0000
3d   AP40:0.0000, 0.0000, 0.0000
Cyclist AP40@0.50, 0.50, 0.50:
bbox AP40:0.0000, 0.0000, 0.0000
bev  AP40:0.0000, 0.0000, 0.0000
3d   AP40:0.0000, 0.0000, 0.0000
Cyclist AP40@0.50, 0.25, 0.25:
bbox AP40:0.0000, 0.0000, 0.0000
bev  AP40:0.0000, 0.0000, 0.0000
3d   AP40:0.0000, 0.0000, 0.0000
Car AP40@0.70, 0.70, 0.70:
bbox AP40:0.0000, 0.0000, 0.0000
bev  AP40:0.0000, 0.0000, 0.0000
3d   AP40:0.0000, 0.0000, 0.0000
Car AP40@0.70, 0.50, 0.50:
bbox AP40:0.0000, 0.0000, 0.0000
bev  AP40:0.0000, 0.0000, 0.0000
3d   AP40:0.0000, 0.0000, 0.0000

Overall AP40@easy, moderate, hard:
bbox AP40:0.0000, 0.0000, 0.0000
bev  AP40:0.0000, 0.0000, 0.0000
3d   AP40:0.0000, 0.0000, 0.0000

2022-10-29 23:07:30,026 - mmdet - INFO - Epoch(val) [1][943]    pts_bbox/KITTI/Pedestrian_3D_AP11_easy_strict: 0.0000, pts_bbox/KITTI/Pedestrian_BEV_AP11_easy_strict: 0.0000, pts_bbox/KITTI/Pedestrian_2D_AP11_easy_strict: 0.0000, pts_bbox/KITTI/Pedestrian_3D_AP11_moderate_strict: 0.0000, pts_bbox/KITTI/Pedestrian_BEV_AP11_moderate_strict: 0.0000, pts_bbox/KITTI/Pedestrian_2D_AP11_moderate_strict: 0.0000, pts_bbox/KITTI/Pedestrian_3D_AP11_hard_strict: 0.0000, pts_bbox/KITTI/Pedestrian_BEV_AP11_hard_strict: 0.0000, pts_bbox/KITTI/Pedestrian_2D_AP11_hard_strict: 0.0000, pts_bbox/KITTI/Pedestrian_3D_AP11_easy_loose: 0.0000, pts_bbox/KITTI/Pedestrian_BEV_AP11_easy_loose: 0.0000, pts_bbox/KITTI/Pedestrian_2D_AP11_easy_loose: 0.0000, pts_bbox/KITTI/Pedestrian_3D_AP11_moderate_loose: 0.0000, pts_bbox/KITTI/Pedestrian_BEV_AP11_moderate_loose: 0.0000, pts_bbox/KITTI/Pedestrian_2D_AP11_moderate_loose: 0.0000, pts_bbox/KITTI/Pedestrian_3D_AP11_hard_loose: 0.0000, pts_bbox/KITTI/Pedestrian_BEV_AP11_hard_loose: 0.0000, pts_bbox/KITTI/Pedestrian_2D_AP11_hard_loose: 0.0000, pts_bbox/KITTI/Cyclist_3D_AP11_easy_strict: 0.0000, pts_bbox/KITTI/Cyclist_BEV_AP11_easy_strict: 0.0000, pts_bbox/KITTI/Cyclist_2D_AP11_easy_strict: 0.0000, pts_bbox/KITTI/Cyclist_3D_AP11_moderate_strict: 0.0000, pts_bbox/KITTI/Cyclist_BEV_AP11_moderate_strict: 0.0000, pts_bbox/KITTI/Cyclist_2D_AP11_moderate_strict: 0.0000, pts_bbox/KITTI/Cyclist_3D_AP11_hard_strict: 0.0000, pts_bbox/KITTI/Cyclist_BEV_AP11_hard_strict: 0.0000, pts_bbox/KITTI/Cyclist_2D_AP11_hard_strict: 0.0000, pts_bbox/KITTI/Cyclist_3D_AP11_easy_loose: 0.0000, pts_bbox/KITTI/Cyclist_BEV_AP11_easy_loose: 0.0000, pts_bbox/KITTI/Cyclist_2D_AP11_easy_loose: 0.0000, pts_bbox/KITTI/Cyclist_3D_AP11_moderate_loose: 0.0000, pts_bbox/KITTI/Cyclist_BEV_AP11_moderate_loose: 0.0000, pts_bbox/KITTI/Cyclist_2D_AP11_moderate_loose: 0.0000, pts_bbox/KITTI/Cyclist_3D_AP11_hard_loose: 0.0000, pts_bbox/KITTI/Cyclist_BEV_AP11_hard_loose: 0.0000, pts_bbox/KITTI/Cyclist_2D_AP11_hard_loose: 0.0000, pts_bbox/KITTI/Car_3D_AP11_easy_strict: 0.0000, pts_bbox/KITTI/Car_BEV_AP11_easy_strict: 0.0000, pts_bbox/KITTI/Car_2D_AP11_easy_strict: 0.0000, pts_bbox/KITTI/Car_3D_AP11_moderate_strict: 0.0000, pts_bbox/KITTI/Car_BEV_AP11_moderate_strict: 0.0000, pts_bbox/KITTI/Car_2D_AP11_moderate_strict: 0.0000, pts_bbox/KITTI/Car_3D_AP11_hard_strict: 0.0000, pts_bbox/KITTI/Car_BEV_AP11_hard_strict: 0.0000, pts_bbox/KITTI/Car_2D_AP11_hard_strict: 0.0000, pts_bbox/KITTI/Car_3D_AP11_easy_loose: 0.0000, pts_bbox/KITTI/Car_BEV_AP11_easy_loose: 0.0000, pts_bbox/KITTI/Car_2D_AP11_easy_loose: 0.0000, pts_bbox/KITTI/Car_3D_AP11_moderate_loose: 0.0000, pts_bbox/KITTI/Car_BEV_AP11_moderate_loose: 0.0000, pts_bbox/KITTI/Car_2D_AP11_moderate_loose: 0.0000, pts_bbox/KITTI/Car_3D_AP11_hard_loose: 0.0000, pts_bbox/KITTI/Car_BEV_AP11_hard_loose: 0.0000, pts_bbox/KITTI/Car_2D_AP11_hard_loose: 0.0000, pts_bbox/KITTI/Overall_3D_AP11_easy: 0.0000, pts_bbox/KITTI/Overall_BEV_AP11_easy: 0.0000, pts_bbox/KITTI/Overall_2D_AP11_easy: 0.0000, pts_bbox/KITTI/Overall_3D_AP11_moderate: 0.0000, pts_bbox/KITTI/Overall_BEV_AP11_moderate: 0.0000, pts_bbox/KITTI/Overall_2D_AP11_moderate: 0.0000, pts_bbox/KITTI/Overall_3D_AP11_hard: 0.0000, pts_bbox/KITTI/Overall_BEV_AP11_hard: 0.0000, pts_bbox/KITTI/Overall_2D_AP11_hard: 0.0000, pts_bbox/KITTI/Pedestrian_3D_AP40_easy_strict: 0.0000, pts_bbox/KITTI/Pedestrian_BEV_AP40_easy_strict: 0.0000, pts_bbox/KITTI/Pedestrian_2D_AP40_easy_strict: 0.0000, pts_bbox/KITTI/Pedestrian_3D_AP40_moderate_strict: 
0.0000, pts_bbox/KITTI/Pedestrian_BEV_AP40_moderate_strict: 0.0000, pts_bbox/KITTI/Pedestrian_2D_AP40_moderate_strict: 0.0000, pts_bbox/KITTI/Pedestrian_3D_AP40_hard_strict: 0.0000, pts_bbox/KITTI/Pedestrian_BEV_AP40_hard_strict: 0.0000, pts_bbox/KITTI/Pedestrian_2D_AP40_hard_strict: 0.0000, pts_bbox/KITTI/Pedestrian_3D_AP40_easy_loose: 0.0000, pts_bbox/KITTI/Pedestrian_BEV_AP40_easy_loose: 0.0000, pts_bbox/KITTI/Pedestrian_2D_AP40_easy_loose: 0.0000, pts_bbox/KITTI/Pedestrian_3D_AP40_moderate_loose: 0.0000, pts_bbox/KITTI/Pedestrian_BEV_AP40_moderate_loose: 0.0000, pts_bbox/KITTI/Pedestrian_2D_AP40_moderate_loose: 0.0000, pts_bbox/KITTI/Pedestrian_3D_AP40_hard_loose: 0.0000, pts_bbox/KITTI/Pedestrian_BEV_AP40_hard_loose: 0.0000, pts_bbox/KITTI/Pedestrian_2D_AP40_hard_loose: 0.0000, pts_bbox/KITTI/Cyclist_3D_AP40_easy_strict: 0.0000, pts_bbox/KITTI/Cyclist_BEV_AP40_easy_strict: 0.0000, pts_bbox/KITTI/Cyclist_2D_AP40_easy_strict: 0.0000, pts_bbox/KITTI/Cyclist_3D_AP40_moderate_strict: 0.0000, pts_bbox/KITTI/Cyclist_BEV_AP40_moderate_strict: 0.0000, pts_bbox/KITTI/Cyclist_2D_AP40_moderate_strict: 0.0000, pts_bbox/KITTI/Cyclist_3D_AP40_hard_strict: 0.0000, pts_bbox/KITTI/Cyclist_BEV_AP40_hard_strict: 0.0000, pts_bbox/KITTI/Cyclist_2D_AP40_hard_strict: 0.0000, pts_bbox/KITTI/Cyclist_3D_AP40_easy_loose: 0.0000, pts_bbox/KITTI/Cyclist_BEV_AP40_easy_loose: 0.0000, pts_bbox/KITTI/Cyclist_2D_AP40_easy_loose: 0.0000, pts_bbox/KITTI/Cyclist_3D_AP40_moderate_loose: 0.0000, pts_bbox/KITTI/Cyclist_BEV_AP40_moderate_loose: 0.0000, pts_bbox/KITTI/Cyclist_2D_AP40_moderate_loose: 0.0000, pts_bbox/KITTI/Cyclist_3D_AP40_hard_loose: 0.0000, pts_bbox/KITTI/Cyclist_BEV_AP40_hard_loose: 0.0000, pts_bbox/KITTI/Cyclist_2D_AP40_hard_loose: 0.0000, pts_bbox/KITTI/Car_3D_AP40_easy_strict: 0.0000, pts_bbox/KITTI/Car_BEV_AP40_easy_strict: 0.0000, pts_bbox/KITTI/Car_2D_AP40_easy_strict: 0.0000, pts_bbox/KITTI/Car_3D_AP40_moderate_strict: 0.0000, pts_bbox/KITTI/Car_BEV_AP40_moderate_strict: 0.0000, pts_bbox/KITTI/Car_2D_AP40_moderate_strict: 0.0000, pts_bbox/KITTI/Car_3D_AP40_hard_strict: 0.0000, pts_bbox/KITTI/Car_BEV_AP40_hard_strict: 0.0000, pts_bbox/KITTI/Car_2D_AP40_hard_strict: 0.0000, pts_bbox/KITTI/Car_3D_AP40_easy_loose: 0.0000, pts_bbox/KITTI/Car_BEV_AP40_easy_loose: 0.0000, pts_bbox/KITTI/Car_2D_AP40_easy_loose: 0.0000, pts_bbox/KITTI/Car_3D_AP40_moderate_loose: 0.0000, pts_bbox/KITTI/Car_BEV_AP40_moderate_loose: 0.0000, pts_bbox/KITTI/Car_2D_AP40_moderate_loose: 0.0000, pts_bbox/KITTI/Car_3D_AP40_hard_loose: 0.0000, pts_bbox/KITTI/Car_BEV_AP40_hard_loose: 0.0000, pts_bbox/KITTI/Car_2D_AP40_hard_loose: 0.0000, pts_bbox/KITTI/Overall_3D_AP40_easy: 0.0000, pts_bbox/KITTI/Overall_BEV_AP40_easy: 0.0000, pts_bbox/KITTI/Overall_2D_AP40_easy: 0.0000, pts_bbox/KITTI/Overall_3D_AP40_moderate: 0.0000, pts_bbox/KITTI/Overall_BEV_AP40_moderate: 0.0000, pts_bbox/KITTI/Overall_2D_AP40_moderate: 0.0000, pts_bbox/KITTI/Overall_3D_AP40_hard: 0.0000, pts_bbox/KITTI/Overall_BEV_AP40_hard: 0.0000, pts_bbox/KITTI/Overall_2D_AP40_hard: 0.0000
2022-10-29 23:07:32,242 - mmdet - INFO - Saving checkpoint at 2 epochs
[                                                  ] 0/3769, elapsed: 0s, ETA:/home/hhz/code/detection/mmdetection3d/mmdet3d/models/fusion_layers/coord_transform.py:34: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  torch.tensor(img_meta['pcd_rotation'], dtype=dtype, device=device)

Additional information

I used KITTI for training. It seems to be a problem with mmcv. @lindahua @happynear @aditya9710 I need your help. Thanks!

VVsssssk commented 1 year ago

Hi, it seems like a NaN loss error. Can you check your log to see whether you get NaN loss during training? And if you are using 4 GPUs, I suggest turning down the learning rate.
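
For example, you can override the optimizer in your config with a lower lr. This is just a rough sketch: the defaults come from the inherited ../_base_/schedules/cosine.py, and the exact value is something to tune.

```python
# Override the inherited cosine-schedule optimizer with a lower learning rate.
# lr=0.0005 is only an illustrative value, not a verified setting.
optimizer = dict(type='AdamW', lr=0.0005, betas=(0.95, 0.99), weight_decay=0.01)
```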

hellohaozheng commented 1 year ago

OK, I'll try turning down the learning rate. But I don't understand what you mean by NaN loss. Can you explain it in detail? Thanks! @VVsssssk

hellohaozheng commented 1 year ago

Hello, I found another issue reporting the same problem. There may be some problems with the original MVX-Net model. @VVsssssk @lindahua @atinfinity @mickeyouyou

VVsssssk commented 1 year ago

Yeah, when I trained MVXNet I ran into some problems too: sometimes the loss becomes NaN or training raises an OOM error, so I think maybe it's unstable. I turned down the learning rate to get a relatively normal result.
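
If you want to catch it early, one option is a small custom hook (a hypothetical helper, not part of mmdet3d) that aborts as soon as the loss becomes non-finite:

```python
import torch
from mmcv.runner import HOOKS, Hook


@HOOKS.register_module()
class NaNLossHook(Hook):
    """Hypothetical helper: abort training as soon as the loss is NaN/Inf."""

    def after_train_iter(self, runner):
        # mmdet's train_step puts the total loss into runner.outputs['loss'].
        loss = runner.outputs.get('loss')
        if loss is not None and not torch.isfinite(loss):
            runner.logger.error('Non-finite loss at iter %d', runner.iter)
            raise RuntimeError('NaN/Inf loss, aborting training')
```

It can then be enabled with custom_hooks = [dict(type='NaNLossHook')] in the config.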

JingweiZhang12 commented 1 year ago

@hellohaozheng Hi, we have fixed this bug in the PR https://github.com/open-mmlab/mmdetection3d/pull/2282

CrushDory commented 1 year ago

Hello, have you solved this? I ran into the same problem; the visualized detection boxes are also completely off from the objects.

hellohaozheng commented 1 year ago

> Hello, have you solved this? I ran into the same problem; the visualized detection boxes are also completely off from the objects.

I remember I fixed this problem by adjusting the coordinate system orientation; you can try it.
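
For example, something along these lines (a hedged illustration using mmdet3d's Box3DMode utilities; whether this matches what you need depends on your version and data):

```python
import torch
from mmdet3d.core.bbox import Box3DMode, LiDARInstance3DBoxes

# A car-sized box (x, y, z, dx, dy, dz, yaw) in LiDAR coordinates.
boxes = LiDARInstance3DBoxes(torch.tensor([[10.0, 2.0, -1.0, 3.9, 1.6, 1.56, 0.0]]))
# Convert with the default LiDAR->camera transform to check whether the
# orientation convention is what throws the visualized boxes off.
cam_boxes = boxes.convert_to(Box3DMode.CAM)
print(cam_boxes)
```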

CrushDory commented 1 year ago

I think so. Can you share the code you changed for it?