open-mmlab / mmdetection3d

OpenMMLab's next-generation platform for general 3D object detection.
https://mmdetection3d.readthedocs.io/en/latest/
Apache License 2.0

[Bug] #2584

Open ahnz7 opened 1 year ago

ahnz7 commented 1 year ago

Prerequisite

Task

I'm using the official example scripts/configs for the officially supported tasks/models/datasets.

Branch

1.x branch https://github.com/open-mmlab/mmdetection3d/tree/dev-1.x

Environment

sys.platform: linux
Python: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce RTX 3080
CUDA_HOME: /usr/local/cuda-11.6
NVCC: Cuda compilation tools, release 11.6, V11.6.124
GCC: gcc (Ubuntu 9.5.0-3ubuntu1) 9.5.0
PyTorch: 1.10.0+cu113
PyTorch compiling details: PyTorch built with: …
TorchVision: 0.11.0+cu113
OpenCV: 4.7.0
MMEngine: 0.7.3
MMDetection: 3.0.0
MMDetection3D: 1.1.1+3fa0e8b
spconv2.0: True

Reproduces the problem - code sample

See 'Additional information' below.

Reproduces the problem - command or script

python demo/multi_modality_demo.py demo/data/nuscenes/n015-2018-07-24-11-22-45+0800__LIDAR_TOP__1532402927647951.pcd.bin demo/data/nuscenes/ demo/data/nuscenes/n015-2018-07-24-11-22-45+0800.pkl projects/BEVFusion/configs/bevfusion_lidar-cam_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d.py /home/projects/deepsort_co/mmdetection3d/work_dirs/bevfusion_lidar-cam_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d/epoch_1.pth --cam-type all --score-thr 0.2 --show

Reproduces the problem - error message

size mismatch for pts_middle_encoder.conv_input.0.weight: copying a param with shape ('pts_middle_encoder.conv_input.0.weight', torch.Size([5, 16, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 5]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer1.0.conv1.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer1.0.conv1.weight', torch.Size([16, 16, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer1.0.conv2.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer1.0.conv2.weight', torch.Size([16, 16, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer1.1.conv1.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer1.1.conv1.weight', torch.Size([16, 16, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer1.1.conv2.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer1.1.conv2.weight', torch.Size([16, 16, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer1.2.0.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer1.2.0.weight', torch.Size([16, 32, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 16]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer2.0.conv1.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer2.0.conv1.weight', torch.Size([32, 32, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer2.0.conv2.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer2.0.conv2.weight', torch.Size([32, 32, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer2.1.conv1.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer2.1.conv1.weight', torch.Size([32, 32, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer2.1.conv2.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer2.1.conv2.weight', torch.Size([32, 32, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer2.2.0.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer2.2.0.weight', torch.Size([32, 64, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 32]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer3.0.conv1.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer3.0.conv1.weight', torch.Size([64, 64, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer3.0.conv2.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer3.0.conv2.weight', torch.Size([64, 64, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer3.1.conv1.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer3.1.conv1.weight', torch.Size([64, 64, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer3.1.conv2.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer3.1.conv2.weight', torch.Size([64, 64, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer3.2.0.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer3.2.0.weight', torch.Size([64, 128, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([128, 3, 3, 3, 64]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer4.0.conv1.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer4.0.conv1.weight', torch.Size([128, 128, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([128, 3, 3, 3, 128]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer4.0.conv2.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer4.0.conv2.weight', torch.Size([128, 128, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([128, 3, 3, 3, 128]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer4.1.conv1.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer4.1.conv1.weight', torch.Size([128, 128, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([128, 3, 3, 3, 128]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer4.1.conv2.weight: copying a param with shape ('pts_middle_encoder.encoder_layers.encoder_layer4.1.conv2.weight', torch.Size([128, 128, 3, 3, 3])) from checkpoint, the shape in current model is torch.Size([128, 3, 3, 3, 128]).
size mismatch for pts_middle_encoder.conv_out.0.weight: copying a param with shape ('pts_middle_encoder.conv_out.0.weight', torch.Size([128, 128, 1, 1, 3])) from checkpoint, the shape in current model is torch.Size([128, 1, 1, 3, 128]).

Additional information

I downloaded the pre-trained model from https://download.openmmlab.com/mmdetection3d/v1.1.0_models/bevfusion/bevfusion_lidar_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d-2628f933.pth and used it to initialize training of my local model with this command:

bash tools/dist_train.sh projects/BEVFusion/configs/bevfusion_lidar-cam_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d.py 1 --cfg-options load_from=/home/projects/mmdetection3d/pretrained/bevfusion_lidar_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d-2628f933.pth model.img_backbone.init_cfg.checkpoint=/home/projects/mmdetection3d/pretrained/swint-nuimages-pretrained.pth

However, I encountered this issue when I tried to visualize the results after training.

JingweiZhang12 commented 1 year ago

@ahnz7 Hi, I guess your program stepped into this line: https://github.com/open-mmlab/mmdetection3d/blob/main/mmdet3d/models/layers/spconv/overwrite_spconv/write_spconv2.py#L72. Does it work normally when you run pcd_demo.py with only the lidar detector, using the lidar-modality checkpoint we provided?
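
For reference, the shapes in the error log are consistent with the checkpoint storing sparse-conv kernels as (C_in, C_out, kD, kH, kW) while the spconv 2.x model expects (C_out, kD, kH, kW, C_in). A minimal illustration using the first mismatched entry (an editor's sketch, not code from the repository):

import torch

# From the error log: the checkpoint has [5, 16, 3, 3, 3] while the
# model wants [16, 3, 3, 3, 5]; moving dim 0 to the end reconciles them.
ckpt_w = torch.zeros(5, 16, 3, 3, 3)     # (C_in, C_out, kD, kH, kW)
model_w = ckpt_w.permute(1, 2, 3, 4, 0)  # (C_out, kD, kH, kW, C_in)
print(tuple(model_w.shape))              # (16, 3, 3, 3, 5)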

BoomSky0416 commented 1 year ago

@ahnz7 Hello, try this transposition to fix it.

import torch

# Re-order the sparse-conv kernels in the released BEVFusion lidar checkpoint
# from (C_in, C_out, kD, kH, kW) to the (C_out, kD, kH, kW, C_in) layout
# expected by the spconv 2.x model.
path = 'bevfusion_lidar_voxel0075_second_secfpn_8xb4-cyclic-20e_nus-3d-2628f933.pth'
checkpoint = torch.load(path, map_location='cpu')
state_dict = checkpoint['state_dict']

special_keys = {
    'pts_middle_encoder.conv_input.0.weight',
    'pts_middle_encoder.conv_out.0.weight',
    'pts_middle_encoder.encoder_layers.encoder_layer1.2.0.weight',
    'pts_middle_encoder.encoder_layers.encoder_layer2.2.0.weight',
    'pts_middle_encoder.encoder_layers.encoder_layer3.2.0.weight',
}

for key in list(state_dict.keys()):
    is_encoder_conv = ('pts_middle_encoder.encoder_layers.encoder_layer' in key
                       and 'conv' in key)
    if key in special_keys or is_encoder_conv:
        # Move dim 0 to the end: (0, 1, 2, 3, 4) -> (1, 2, 3, 4, 0).
        state_dict[key] = state_dict[key].permute(1, 2, 3, 4, 0)

torch.save(checkpoint, 'official_convert_epoch_20.pth')
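
A quick way to verify the conversion, using the first mismatched key from the error log (a sketch; the expected shape is the one the current model reported):

import torch

ckpt = torch.load('official_convert_epoch_20.pth', map_location='cpu')
w = ckpt['state_dict']['pts_middle_encoder.conv_input.0.weight']
print(w.shape)  # expected: torch.Size([16, 3, 3, 3, 5])

The converted file can then be passed to the demo script in place of the original checkpoint.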
ahnz7 commented 1 year ago

> @ahnz7 Hi, I guess your program stepped into this line: https://github.com/open-mmlab/mmdetection3d/blob/main/mmdet3d/models/layers/spconv/overwrite_spconv/write_spconv2.py#L72. Does it work normally when you run pcd_demo.py with only the lidar detector, using the lidar-modality checkpoint we provided?

Thanks for your reply. When I ran pcd_demo.py with only the lidar detector and the lidar-modality checkpoint you provided, it worked, but the result looked strange. Then I noticed the tip from @BoomSky0416 and tried it; the lidar results now look good. Thank you very much for your help. One more question: I trained on an RTX 3090, but the maximum batch size I can set is 2, not 4, even though I already used --amp. Do you know why, or did I make a mistake somewhere?
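
A likely explanation, though not confirmed in this thread: the 8xb4 in the config name denotes 8 GPUs with 4 samples each, so the released setting assumes substantially more total memory than a single 24 GB card. The per-GPU batch size can be lowered without editing the config file; a sketch, assuming the standard MMEngine train_dataloader field:

# Override in the config, or on the command line via
#   --cfg-options train_dataloader.batch_size=2
train_dataloader = dict(batch_size=2)  # the 8xb4 config defaults to 4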

Hardy-Uint commented 1 year ago

> @ahnz7 Hello, try this transposition to fix it. [conversion script quoted above]

Yes, it works.

JingweiZhang12 commented 1 year ago

@Hardy-Uint @ahnz7 Hi, thanks for your feedback. I couldn't reproduce this bug. Are you sure you are using spconv>=2.0 rather than the spconv bundled in mmcv? We'd appreciate more details to help us fix it.
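
A quick way to check which sparse-conv backend is active (a sketch, assuming the IS_SPCONV2_AVAILABLE flag that mmdet3d 1.x exports from its spconv layer package):

# mmdet3d sets this flag at import time; when it is False, the sparse ops
# bundled with mmcv (with their different weight layout) are used instead.
from mmdet3d.models.layers.spconv import IS_SPCONV2_AVAILABLE

print(IS_SPCONV2_AVAILABLE)  # True -> spconv 2.x is active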

ahnz7 commented 1 year ago

> @Hardy-Uint @ahnz7 Hi, thanks for your feedback. I couldn't reproduce this bug. Are you sure you are using spconv>=2.0 rather than the spconv bundled in mmcv? […]

I think my spconv is >= 2.0, as shown in the attached screenshot. @BoomSky0416 has already given us a method to solve this problem, so I think he ran into it too?

Hardy-Uint commented 1 year ago

@ahnz7 Yes, I'm using spconv-cu113==2.3.6.

WuYanXingege commented 9 months ago

> @Hardy-Uint @ahnz7 Hi, thanks for your feedback. I couldn't reproduce this bug. Are you sure you are using spconv>=2.0 rather than the spconv bundled in mmcv? […]

In the BEVFusion lidar branch, the original code uses a point cloud range of [-54, -54, -5, 54, 54, 3], a voxel size of [0.075, 0.075, 0.2], and a sparse shape of [1440, 1440, 41]. After I modified the point cloud range to [0, -30, -3, 60, 30, 1], the voxel size to [0.05, 0.05, 0.1], and the sparse shape to [1200, 1200, 41], I hit an error during training, while the original code trains successfully: CUDA error: invalid configuration argument.
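
"invalid configuration argument" is a CUDA kernel-launch error, so one thing worth ruling out (a sketch, not a confirmed diagnosis) is a mismatch between point_cloud_range, voxel_size, and sparse_shape:

# Sanity check: sparse_shape should match the voxel grid implied by
# point_cloud_range and voxel_size (the default nuScenes setting gives
# [1440, 1440, 40], stored as [1440, 1440, 41] with the customary +1 on z).
point_cloud_range = [0, -30, -3, 60, 30, 1]
voxel_size = [0.05, 0.05, 0.1]

grid = [round((point_cloud_range[i + 3] - point_cloud_range[i]) / voxel_size[i])
        for i in range(3)]
print(grid)  # [1200, 1200, 40] -> sparse_shape [1200, 1200, 41]

By this check the modified values quoted above are self-consistent, so the launch failure likely comes from somewhere else in the pipeline.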