Closed thatnn closed 1 year ago
According to your config, you use the wrong `sparse_shape: [400, 300, 32]` and `INPUT_SHAPE: [468, 468, 1]`.
Thank you for your reply.
What values fit `sparse_shape` and `INPUT_SHAPE`?
Are they related to the point cloud range?
Thank you
They are determined by `POINT_CLOUD_RANGE`, `VOXEL_SIZE`, and `downsample_stride`.
Can you suggest some values?
Thanks
The `downsample_stride` should be chosen carefully and is relative to `INPUT_SHAPE` in `MAP_TO_BEV`. I recommend you read the original code.
By the way, if you use a `voxel_size` of `[0.4, 0.4, 0.1875]` and a `POINT_CLOUD_RANGE` of `[0, -39.68, -3, 69.12, 39.68, 1]`, the `sparse_shape` should be `[173, 199, 22]`.
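If it helps, the `sparse_shape` can be derived from `POINT_CLOUD_RANGE` and `VOXEL_SIZE` by dividing each axis extent by the voxel size. This is a minimal sketch (the helper name and the ceiling rounding are my assumptions, but they reproduce the `[173, 199, 22]` above):

```python
import math

# Hypothetical helper: derive the voxel grid shape from the point-cloud
# range and voxel size. Rounding up (ceil) is an assumption that matches
# the numbers quoted in this thread.
def sparse_shape_from_range(point_cloud_range, voxel_size):
    x_min, y_min, z_min, x_max, y_max, z_max = point_cloud_range
    vx, vy, vz = voxel_size
    return [
        math.ceil((x_max - x_min) / vx),  # voxels along x
        math.ceil((y_max - y_min) / vy),  # voxels along y
        math.ceil((z_max - z_min) / vz),  # voxels along z
    ]

print(sparse_shape_from_range([0, -39.68, -3, 69.12, 39.68, 1],
                              [0.4, 0.4, 0.1875]))
# -> [173, 199, 22]
```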
@thatnn Have you trained successfully after using the above parameters?
First, thank you for your amazing work.
I want to train and test with KITTI-format data, so I modified some parameters, but it doesn't work.
The error is below:
```
Traceback (most recent call last):
  File "train.py", line 228, in <module>
    main()
  File "train.py", line 172, in main
    train_model(
  File "/home/user/DSVT/tools/train_utils/train_utils.py", line 224, in train_model
    accumulated_iter = train_one_epoch(
  File "/home/user/DSVT/tools/train_utils/train_utils.py", line 75, in train_one_epoch
    loss, tb_dict, disp_dict = model_func(model, batch)
  File "/home/user/DSVT/tools/../pcdet/models/__init__.py", line 42, in model_func
    ret_dict, tb_dict, disp_dict = model(batch_dict)
  File "/home/user/anaconda3/envs/openpcdet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/user/DSVT/tools/../pcdet/models/detectors/centerpoint.py", line 14, in forward
    loss, tb_dict, disp_dict = self.get_training_loss()
  File "/home/user/DSVT/tools/../pcdet/models/detectors/centerpoint.py", line 27, in get_training_loss
    loss_rpn, tb_dict = self.dense_head.get_loss()
  File "/home/user/DSVT/tools/../pcdet/models/dense_heads/center_head.py", line 258, in get_loss
    hm_loss = self.hm_loss_func(pred_dict['hm'], target_dicts['heatmaps'][idx])
  File "/home/user/anaconda3/envs/openpcdet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/user/DSVT/tools/../pcdet/utils/loss_utils.py", line 312, in forward
    return self.neg_loss(out, target, mask=mask)
  File "/home/user/DSVT/tools/../pcdet/utils/loss_utils.py", line 282, in neg_loss_cornernet
    pos_loss = torch.log(pred) * torch.pow(1 - pred, 2) * pos_inds
RuntimeError: The size of tensor a (3) must match the size of tensor b (4) at non-singleton dimension 0
```
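The 3-vs-4 mismatch at dimension 0 usually means the number of predicted class heatmaps differs from the number of classes the targets were built for (my guess: `CLASS_NAMES` in the yaml doesn't match the dense head's class groups, but I can't confirm without seeing the yaml). A minimal sketch reproducing the broadcast failure with hypothetical shapes:

```python
import torch

# Hypothetical shapes: 3 predicted class heatmaps vs. targets built for
# 4 classes. Broadcasting fails on the class dimension (dim 0), exactly
# as in the traceback above.
pred = torch.rand(3, 200, 176).clamp(min=1e-4)  # predicted heatmaps, 3 classes
pos_inds = torch.rand(4, 200, 176)              # targets, 4 classes
try:
    pos_loss = torch.log(pred) * torch.pow(1 - pred, 2) * pos_inds
except RuntimeError as e:
    print(e)  # size-mismatch error at non-singleton dimension 0
```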
and here is my yaml file.
Can you help me, or provide a yaml file for training on the KITTI dataset?
I'm waiting for your reply.
Thank you!!