xiaomonv111 opened this issue 1 year ago
We suggest using the `dev-1.x` branch, which has the latest code. The demo on the `dev-1.x` branch has been verified.
Sorry, I feel there is still a problem with `dev-1.x`; I get the same error. Is there any other fix? May I also ask whether `checkpoint_file` should be the `hv_second_secfpn_6x8_80e_kitti-3d-3class_20210831_022017-ae782e87.pth` file downloaded from GitHub?
The SECOND checkpoint link has been updated in https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/configs/second/README.md.
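For reference, downloading the weights from a script might look like the sketch below. The URL here is an assumption based on the usual `download.openmmlab.com` layout, so verify it against the README linked above:

```python
# Hypothetical URL -- check the SECOND README above for the authoritative link.
import os
import torch

os.makedirs('checkpoints', exist_ok=True)
url = ('https://download.openmmlab.com/mmdetection3d/v1.0.0_models/second/'
       'hv_second_secfpn_6x8_80e_kitti-3d-3class/'
       'hv_second_secfpn_6x8_80e_kitti-3d-3class_20210831_022017-ae782e87.pth')
torch.hub.download_url_to_file(
    url, 'checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-3class.pth')
```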
The `palette` and `classes` keys should be lower case, and they live under `model.dataset_meta`; see https://github.com/open-mmlab/mmdetection3d/blob/dev-1.x/mmdet3d/apis/inference.py#L82.
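Concretely, on `dev-1.x` the visualizer setup would look roughly like this (a minimal sketch; `config_file` and `checkpoint_file` stand in for your own paths):

```python
from mmdet3d.apis import init_model
from mmdet3d.registry import VISUALIZERS

model = init_model(config_file, checkpoint_file, device='cuda:0')
visualizer = VISUALIZERS.build(model.cfg.visualizer)
# lower-case keys, read from model.dataset_meta rather than model attributes
visualizer.dataset_meta = {
    'classes': model.dataset_meta['classes'],
    'palette': model.dataset_meta['palette'],
}
```

Assigning the whole dict also works: `visualizer.dataset_meta = model.dataset_meta`.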
Very sorry, it still fails with an error:
```
AttributeError                            Traceback (most recent call last)
/tmp/ipykernel_23136/390537073.py in <module>

~/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
   1268             return modules[name]
   1269         raise AttributeError("'{}' object has no attribute '{}'".format(
-> 1270             type(self).__name__, name))
   1271
   1272     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:

AttributeError: 'VoxelNet' object has no attribute 'classes'
```
Thank you very much.
@xiaomonv111 Hi, you can refer to this demo example to customize your own demo.
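For reference, the single-sample flow from that notebook boils down to roughly the following (a sketch, assuming the SECOND config/checkpoint pair discussed above and the KITTI sample shipped with the demo):

```python
from mmdet3d.apis import inference_detector, init_model
from mmdet3d.registry import VISUALIZERS
from mmdet3d.utils import register_all_modules

# register all mmdet3d modules into the registries
register_all_modules()

config_file = 'configs/second/second_hv_secfpn_8xb6-80e_kitti-3d-3class.py'
checkpoint_file = 'checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-3class_20210831_022017-ae782e87.pth'

# build the model and run inference on a single point cloud
model = init_model(config_file, checkpoint_file, device='cuda:0')
result, data = inference_detector(model, 'demo/data/kitti/000008.bin')

# visualize the predictions; dataset_meta already carries the lower-case keys
visualizer = VISUALIZERS.build(model.cfg.visualizer)
visualizer.dataset_meta = model.dataset_meta
visualizer.add_datasample(
    'result',
    dict(points=data['inputs']['points']),
    data_sample=result,
    draw_gt=False,
    show=True,
    vis_task='lidar_det')
```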
Running on `dev-1.x`, in `inference_demo.ipynb`:
@JingweiZhang12
```python
# test a single sample
pcd = './data/kitti/000008.bin'
result, data = inference_detector(model, pcd)
points = data['inputs']['points']
data_input = dict(points=points)
```
Error:
```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
/workspaces/smartGate/trainers/mmdetection3d/demo/inference_demo.ipynb Cell 8 line 3
      1 # test a single sample
      2 pcd = './data/kitti/000008.bin'
----> 3 result, data = inference_detector(model, pcd)
      4 points = data['inputs']['points']
      5 data_input = dict(points=points)

File ~/.local/lib/python3.10/site-packages/mmdet3d/apis/inference.py:182, in inference_detector(model, pcds)
    180 # forward the model
    181 with torch.no_grad():
--> 182     results = model.test_step(collate_data)
    184 if not is_batch:
    185     return results[0], data[0]

File /usr/local/lib/python3.10/dist-packages/mmengine/model/base_model/base_model.py:145, in BaseModel.test_step(self, data)
    136 """``BaseModel`` implements ``test_step`` the same as ``val_step``.
    137
    138 Args:
    (...)
    142     list: The predictions of given data.
    143 """
    144 data = self.data_preprocessor(data, False)
--> 145 return self._run_forward(data, mode='predict')

File /usr/local/lib/python3.10/dist-packages/mmengine/model/base_model/base_model.py:340, in BaseModel._run_forward(self, data, mode)
    330 """Unpacks data for :meth:`forward`
    331
    332 Args:
    (...)
    337     dict or list: Results of training or testing mode.
    338 """
    339 if isinstance(data, dict):
--> 340     results = self(**data, mode=mode)
    341 elif isinstance(data, (list, tuple)):
    342     results = self(*data, mode=mode)

File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File ~/.local/lib/python3.10/site-packages/mmdet3d/models/detectors/base.py:86, in Base3DDetector.forward(self, inputs, data_samples, mode, **kwargs)
     84     return self.aug_test(inputs, data_samples, **kwargs)
     85 else:
---> 86     return self.predict(inputs, data_samples, **kwargs)
     87 elif mode == 'tensor':
     88     return self._forward(inputs, data_samples, **kwargs)

File ~/.local/lib/python3.10/site-packages/mmdet3d/models/detectors/single_stage.py:109, in SingleStage3DDetector.predict(self, batch_inputs_dict, batch_data_samples, **kwargs)
     78 def predict(self, batch_inputs_dict: dict, batch_data_samples: SampleList,
     79             **kwargs) -> SampleList:
     80     """Predict results from a batch of inputs and data samples with post-
     81     processing.
     82
    (...)
    107         (num_instances, C) where C >= 7.
    108     """
--> 109     x = self.extract_feat(batch_inputs_dict)
    110     results_list = self.bbox_head.predict(x, batch_data_samples, **kwargs)
    111     predictions = self.add_pred_to_datasample(batch_data_samples,
    112                                               results_list)

File ~/.local/lib/python3.10/site-packages/mmdet3d/models/detectors/voxelnet.py:43, in VoxelNet.extract_feat(self, batch_inputs_dict)
     39 voxel_features = self.voxel_encoder(voxel_dict['voxels'],
     40                                     voxel_dict['num_points'],
     41                                     voxel_dict['coors'])
     42 batch_size = voxel_dict['coors'][-1, 0].item() + 1
---> 43 x = self.middle_encoder(voxel_features, voxel_dict['coors'],
     44                         batch_size)
     45 x = self.backbone(x)
     46 if self.with_neck:

File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File /usr/lib/python3.10/contextlib.py:79, in ContextDecorator.__call__.<locals>.inner(*args, **kwds)
     76 @wraps(func)
     77 def inner(*args, **kwds):
     78     with self._recreate_cm():
---> 79         return func(*args, **kwds)

File ~/.local/lib/python3.10/site-packages/mmdet3d/models/middle_encoders/sparse_encoder.py:145, in SparseEncoder.forward(self, voxel_features, coors, batch_size)
    142 coors = coors.int()
    143 input_sp_tensor = SparseConvTensor(voxel_features, coors,
    144                                    self.sparse_shape, batch_size)
--> 145 x = self.conv_input(input_sp_tensor)
    147 encode_features = []
    148 for encoder_layer in self.encoder_layers:

File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File /usr/local/lib/python3.10/dist-packages/mmcv/ops/sparse_modules.py:135, in SparseSequential.forward(self, input)
    133     assert isinstance(input, SparseConvTensor)
    134     self._sparity_dict[k] = input.sparity
--> 135     input = module(input)
    136 else:
    137     if isinstance(input, SparseConvTensor):

File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File /usr/local/lib/python3.10/dist-packages/mmcv/ops/sparse_conv.py:157, in SparseConvolution.forward(self, input)
    155     outids, _, indice_pairs, indice_pair_num, _ = data
    156 else:
--> 157     outids, indice_pairs, indice_pair_num = ops.get_indice_pairs(
    158         indices,
    159         batch_size,
    160         spatial_shape,
    161         self.kernel_size,
    162         self.stride,
    163         self.padding,
    164         self.dilation,
    165         self.output_padding,
    166         self.subm,
    167         self.transposed,
    168         grid=input.grid)
    169 input.indice_dict[self.indice_key] = (outids, indices,
    170                                       indice_pairs,
    171                                       indice_pair_num,
    172                                       spatial_shape)
    173 if self.fused_bn:

File /usr/local/lib/python3.10/dist-packages/mmcv/ops/sparse_ops.py:99, in get_indice_pairs(indices, batch_size, spatial_shape, ksize, stride, padding, dilation, out_padding, subm, transpose, grid)
     97 else:
     98     raise NotImplementedError
---> 99 return get_indice_pairs_func(indices, batch_size, out_shape,
    100                              spatial_shape, ksize, stride, padding,
    101                              dilation, out_padding, int(subm),
    102                              int(transpose))
    103 else:
    104     if ndim == 2:

RuntimeError: /tmp/mmcv/mmcv/ops/csrc/pytorch/cuda/sparse_indice.cu 126
cuda execution failed with error 2
```
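One quick sanity check (a debugging sketch, assuming a single CUDA device; this diagnosis is my own, not confirmed in the thread): error 2 from the CUDA runtime corresponds to `cudaErrorMemoryAllocation`, so confirming the installed versions and the free GPU memory is a reasonable first step:

```python
import torch
import mmcv

# versions that mmcv's compiled CUDA ops must be consistent with
print('torch:', torch.__version__, 'built for CUDA', torch.version.cuda)
print('mmcv:', mmcv.__version__)
print('cuda available:', torch.cuda.is_available())

# free vs. total memory (bytes) on the current CUDA device
free, total = torch.cuda.mem_get_info()
print(f'GPU memory: {free / 1e9:.2f} GB free of {total / 1e9:.2f} GB')
```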
@xiaomonv111 I am hitting the same error. How did you solve it?
No.
This framework is too difficult to get started with.
```python
from mmdet3d.apis import inference_detector, init_model
from mmdet3d.registry import VISUALIZERS
from mmdet3d.utils import register_all_modules

config_file = '/home/dxy/deepl/mmdetection3d/configs/second/second_hv_secfpn_8xb6-80e_kitti-3d-3class.py'
checkpoint = '/home/dxy/deepl/mmdetection3d/checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-3class_20210831_022017-ae782e87.pth'
model = init_model(config_file, checkpoint, device='cuda:0')

visualizer = VISUALIZERS.build(model.cfg.visualizer)
visualizer = VISUALIZERS.build(model.cfg.visualizer)
visualizer.dataset_meta = {
    'CLASSES': model.CLASSES,
    'PALETTE': model.PALETTE
}
```
Error:
```
AttributeError                            Traceback (most recent call last)
/tmp/ipykernel_23136/1014200392.py in <module>
      2 visualizer = VISUALIZERS.build(model.cfg.visualizer)
      3 visualizer.dataset_meta = {
----> 4     'CLASSES': model.CLASSES,
      5     'PALETTE': model.PALETTE
      6 }

~/miniconda3/envs/openmmlab/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
   1268             return modules[name]
   1269         raise AttributeError("'{}' object has no attribute '{}'".format(
-> 1270             type(self).__name__, name))
   1271
   1272     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:

AttributeError: 'VoxelNet' object has no attribute 'CLASSES'
```
Why does this demo fail? I chose SECOND. Thank you.
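As noted earlier in the thread, on `dev-1.x` the model no longer exposes `CLASSES`/`PALETTE` attributes; the metadata lives under `model.dataset_meta` with lower-case keys. A corrected version of the snippet above would be roughly:

```python
visualizer = VISUALIZERS.build(model.cfg.visualizer)
# lower-case keys, read from model.dataset_meta instead of model attributes
visualizer.dataset_meta = {
    'classes': model.dataset_meta['classes'],
    'palette': model.dataset_meta['palette'],
}
```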