Maybe you should change your "POINT_CLOUD_RANGE".
> Maybe you should change your "POINT_CLOUD_RANGE".

@xuchangjia I tried changing "POINT_CLOUD_RANGE", but I still get that error. I debugged the program and found that the channel mismatch mainly comes from here:
/OpenPCDet/pcdet/models/backbones_2d/map_to_bev/height_compression.py:
These dimensions are produced by VoxelBackBone8x (/OpenPCDet/pcdet/models/backbones_3d/spconv_backbone.py). With my point cloud range, the sparse feature volume coming out of the backbone has C = 128 channels and a depth of D = 3 rather than the expected D = 2, so HeightCompression flattens it to 128 * 3 = 384 BEV channels. But pv_rcnn_plusplus.yaml declares:

```yaml
MAP_TO_BEV:
    NAME: HeightCompression
    NUM_BEV_FEATURES: 256
```

and 384 != 256, which is exactly the mismatch in the error.
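For context, the flattening step in height_compression.py amounts to folding the depth axis into the channel axis. A minimal sketch of that step (my shapes, not the exact library code):

```python
import torch

# The dense 3D feature volume (N, C, D, H, W) from the sparse backbone is
# reshaped so the depth axis D is folded into the channels: (N, C*D, H, W).
spatial_features = torch.zeros(4, 128, 3, 152, 377)   # N=4, C=128, D=3 in my run
N, C, D, H, W = spatial_features.shape
bev_features = spatial_features.view(N, C * D, H, W)

# The 2D backbone was built for NUM_BEV_FEATURES = 256 = 128 * 2, so a depth
# of D = 3 yields 384 channels and the conv2d weight check fails.
print(bev_features.shape)  # torch.Size([4, 384, 152, 377])
```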
How should I adjust this configuration to avoid this error?
If it is convenient for you, I would like to discuss this over WeChat; my WeChat ID is lyb543918165.
Did you debug successfully?

> Did you debug successfully?

Yes, I adjusted the configuration of the training model and it can now be trained normally.

> Yes, I adjusted the configuration of the training model and it can now be trained normally.

Thanks, I would like to add you on WeChat to ask some questions.
> Maybe you should change your "POINT_CLOUD_RANGE".
Can you please explain how the point cloud range relates to the number of features? I'm having a similar problem and would appreciate any help.
Edit: https://github.com/open-mmlab/OpenPCDet/issues/253#issuecomment-679190936 helped
@BraunBenni The point cloud range, together with the voxel size, determines the dimensions of the generated voxel grid, and the grid's depth along Z is what ultimately sets the number of BEV features after height compression. You should set the range according to your data, e.g. the spatial extent of your scenes; you may need to try several settings.
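To make that concrete, here is rough arithmetic linking the range to the channel count. It is only a sketch and assumes the stride/padding schedule of the default VoxelBackBone8x; check your own backbone if it differs:

```python
# KITTI-style example: trace how the Z-extent becomes NUM_BEV_FEATURES.
point_cloud_range = [0, -40.0, -3.0, 70.4, 40.0, 1.0]
voxel_size = [0.05, 0.05, 0.1]

grid_z = round((point_cloud_range[5] - point_cloud_range[2]) / voxel_size[2])  # 40
depth = grid_z + 1   # pcdet pads the Z axis by one voxel -> 41

def down(d, k, s, p):
    """Output size along one axis of a conv with kernel k, stride s, padding p."""
    return (d + 2 * p - k) // s + 1

depth = down(depth, 3, 2, 1)   # 41 -> 21
depth = down(depth, 3, 2, 1)   # 21 -> 11
depth = down(depth, 3, 2, 0)   # 11 -> 5
depth = down(depth, 3, 2, 0)   # conv_out: 5 -> 2

num_bev_features = 128 * depth  # the backbone outputs 128 channels
print(num_bev_features)         # 256, matching NUM_BEV_FEATURES in the stock config
```

If a larger Z-extent (or a smaller Z voxel) makes the final depth 3 instead of 2, height compression produces 384 channels and you get exactly the mismatch reported above.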
Hi! For this point cloud range, how do I choose the voxel size?

```yaml
POINT_CLOUD_RANGE: [-32.0, -32.0, -3.0, 32.0, 32.0, 40.0]

- NAME: transform_points_to_voxels
  VOXEL_SIZE: [0.16, 0.16, 4]
  MAX_POINTS_PER_VOXEL: 50
  MAX_NUMBER_OF_VOXELS: {
      'train': 150000,
      'test': 150000
  }
```
No matter what I do, I get the error:
ValueError: your out spatial shape [0, X, X] reach zero!!! input shape: [1, X, X]
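For what it's worth, the same depth arithmetic as above appears to explain this error: with a Z voxel size of 4 over a 43 m Z-extent, the grid is only about a dozen voxels deep, and the backbone's repeated stride-2 downsampling in Z drives the spatial shape to zero. A quick check (again assuming the default VoxelBackBone8x schedule):

```python
# Sanity-check the Z depth before training.
point_cloud_range = [-32.0, -32.0, -3.0, 32.0, 32.0, 40.0]
voxel_size = [0.16, 0.16, 4]

grid_z = round((point_cloud_range[5] - point_cloud_range[2]) / voxel_size[2])  # 11
depth = grid_z + 1   # 12, versus 41 in the KITTI configs

def down(d, k, s, p):
    return (d + 2 * p - k) // s + 1

for k, s, p in [(3, 2, 1), (3, 2, 1), (3, 2, 0), (3, 2, 0)]:
    depth = down(depth, k, s, p)
    print(depth)   # 6, 3, 1, 0 -> "out spatial shape ... reach zero"

# A much smaller Z voxel (e.g. 0.1, as in the KITTI configs) keeps the depth
# positive through all four downsampling steps.
```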
Hi! I want to try using PV-RCNN++ to train on my own data. My data is organized basically in the KITTI dataset format, and each point has the dimensions (x, y, z, intensity). During training, this error appears:
```
Traceback (most recent call last):
  File "train.py", line 222, in <module>
    main()
  File "train.py", line 168, in main
    train_model(
  File "/mnt/NAS/liuyb/OpenPCDet/tools/train_utils/train_utils.py", line 150, in train_model
    accumulated_iter = train_one_epoch(
  File "/mnt/NAS/liuyb/OpenPCDet/tools/train_utils/train_utils.py", line 52, in train_one_epoch
    loss, tb_dict, disp_dict = model_func(model, batch)
  File "/mnt/NAS/liuyb/OpenPCDet/pcdet/models/__init__.py", line 42, in model_func
    ret_dict, tb_dict, disp_dict = model(batch_dict)
  File "/home/user/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/mnt/NAS/liuyb/OpenPCDet/pcdet/models/detectors/pv_rcnn_plusplus.py", line 13, in forward
    batch_dict = self.backbone_2d(batch_dict)
  File "/home/user/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/mnt/NAS/liuyb/OpenPCDet/pcdet/models/backbones_2d/base_bev_backbone.py", line 93, in forward
    x = self.blocks[i](x)
  File "/home/user/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/user/anaconda3/lib/python3.8/site-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/home/user/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/user/anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 446, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/user/anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 442, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [128, 256, 3, 3], expected input[4, 384, 152, 377] to have 256 channels, but got 384 channels instead
```
I don't know whether I should adjust the training model configuration or the data processing; please take a look at this issue. I am eagerly waiting for help.
Attached pv_rcnn_plusplus.yaml (excerpt):

```yaml
CLASS_NAMES: ['car', 'pedestrian', 'bicycle', 'tricycle', 'cyclist', 'motorcyclist', 'tricyclist', 'van', 'bus', 'truck', 'mini_truck', 'special_vehicle', 'traffic_cone', 'small_movable', 'small_unmovable', 'crash_barrel', 'construction_sign', 'noise', 'water_horse', 'other']

DATA_CONFIG:
    _BASE_CONFIG_: cfgs/dataset_configs/as_dataset.yaml
    OUTPUT_PATH: '/mnt/NAS/liuyb/OpenPCDet/model'

MODEL:
    NAME: PVRCNNPlusPlus
    # ...
    NUM_KEYPOINTS: 4096
    NUM_OUTPUT_FEATURES: 90
    NUM_REDUCED_CHANNELS: 2
    # ...
    NMS_PRE_MAXSIZE: 4096
    NMS_POST_MAXSIZE: 500
    NMS_THRESH: 0.85
    NUM_REDUCED_CHANNELS: 30

OPTIMIZATION:
    BATCH_SIZE_PER_GPU: 2
    NUM_EPOCHS: 30
```
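Based on the diagnosis above (the backbone yields depth D = 3, not 2), one possible adjustment, if you keep your current point cloud range, is to make the BEV feature count match the 384 channels the 3D backbone actually produces. This is only a sketch against the stock pv_rcnn_plusplus.yaml layout; the surrounding keys may differ in your copy:

```yaml
MAP_TO_BEV:
    NAME: HeightCompression
    NUM_BEV_FEATURES: 384   # 128 backbone channels * depth 3

# BaseBEVBackbone takes its input channel count from the MAP_TO_BEV module,
# so no separate channel edit should be needed there.
```

The alternative, as suggested earlier in the thread, is to change POINT_CLOUD_RANGE (or the Z voxel size) so that the backbone's output depth returns to 2, in which case the stock NUM_BEV_FEATURES: 256 is correct again.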