Thanks for your great contribution! I tried to run DETR3D on a KITTI-format dataset. My dataset was collected with one camera and one LiDAR. I have converted my custom dataset into KITTI format as follows (this is an example calib file), but I don't know how to solve the following error:
Traceback (most recent call last):
  File "tools/train.py", line 248, in <module>
    main()
  File "tools/train.py", line 244, in main
    meta=meta)
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/mmdet3d/apis/train.py", line 35, in train_model
    meta=meta)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/mmdet/apis/train.py", line 170, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
    self.run_iter(data_batch, train_mode=True, **kwargs)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 30, in run_iter
    **kwargs)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/mmcv/parallel/data_parallel.py", line 67, in train_step
    return self.module.train_step(*inputs[0], **kwargs[0])
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/mmdet/models/detectors/base.py", line 237, in train_step
    losses = self(**data)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/mmcv/runner/fp16_utils.py", line 98, in new_func
    return old_func(*args, **kwargs)
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/mmdet3d/models/detectors/base.py", line 59, in forward
    return self.forward_train(**kwargs)
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/plugin/futr3d/models/detectors/futr3d.py", line 200, in forward_train
    gt_bboxes_ignore)
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/plugin/futr3d/models/detectors/futr3d.py", line 135, in forward_mdfs_train
    outs = self.pts_bbox_head(pts_feats, img_feats, rad_feats, img_metas)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/plugin/futr3d/models/dense_head/detr_mdfs_head.py", line 130, in forward
    img_metas=img_metas,
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/plugin/futr3d/models/utils/transformer.py", line 157, in forward
    **kwargs)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/plugin/futr3d/models/utils/transformer.py", line 215, in forward
    **kwargs)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/mmcv/cnn/bricks/transformer.py", line 508, in forward
    **kwargs)
  File "/HOME/scz3687/.conda/envs/open-mmlab0171/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/plugin/futr3d/models/utils/attention.py", line 241, in forward
    img_feats, reference_points, self.pc_range, kwargs['img_metas'])
  File "/data/run01/scz3687/openmmlab0171/mmdetection3d/plugin/futr3d/models/utils/attention.py", line 344, in feature_sampling
    lidar2img = lidar2img.view(B, num_cam, 1, 4, 4).repeat(1, 1, num_query, 1, 1)
RuntimeError: shape '[1, 4, 1, 4, 4]' is invalid for input of size 16
How can I change lidar2img.size(1) from 4 to 1? I need some help, please.
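From the error message, this looks like a camera-axis problem rather than a projection problem: the stacked `lidar2img` tensor only has 16 elements (one 4x4 matrix), but `num_cam` is being inferred as 4, presumably from the matrix's row dimension. That happens when `lidar2img` in `img_metas` is stored as a bare 4x4 matrix instead of a list of per-camera matrices. Below is a minimal NumPy sketch of my understanding (variable names are illustrative, not the actual mmdet3d code):

```python
import numpy as np

# One camera's lidar-to-image projection: a single 4x4 matrix (16 values).
lidar2img_single = np.eye(4)

# Wrong: stacking the bare matrix makes its 4 rows look like 4 cameras,
# so view(B, num_cam=4, 1, 4, 4) needs 64 values but only 16 exist.
bad = np.asarray(lidar2img_single)        # shape (4, 4) -> num_cam == 4

# Fix: keep an explicit camera axis of length 1 by wrapping the matrix
# in a list before stacking.
good = np.stack([lidar2img_single])       # shape (1, 4, 4) -> num_cam == 1

# The reshape/repeat from feature_sampling now works.
B, num_cam, num_query = 1, good.shape[0], 900
tiled = np.tile(good.reshape(B, num_cam, 1, 4, 4), (1, 1, num_query, 1, 1))
print(tiled.shape)  # (1, 1, 900, 4, 4)
```

If that matches your setup, the fix is probably on the dataset side: make your KITTI-format dataset put `img_metas['lidar2img']` as a list containing the single 4x4 matrix (i.e. `[lidar2img]`) rather than the matrix itself, so `num_cam` is inferred as 1.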