IdrissGo opened this issue 1 year ago
If you modify the voxel_size, you need to modify the output_shape correspondingly.
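To make that relationship concrete, here is a minimal sketch of how output_shape follows from point_cloud_range and voxel_size. The range and voxel size below are the stock values from the mmdetection3d KITTI PointPillars config; the [H, W] ordering for PointPillarsScatter is an assumption worth verifying against your checkout.

```python
# Derive the BEV grid shape from point_cloud_range and voxel_size so that
# output_shape stays consistent with the voxelizer.
point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1]  # stock KITTI range
voxel_size = [0.16, 0.16, 4]                          # stock PointPillars value

# Number of pillars along x and y.
grid_x = round((point_cloud_range[3] - point_cloud_range[0]) / voxel_size[0])
grid_y = round((point_cloud_range[4] - point_cloud_range[1]) / voxel_size[1])

# PointPillarsScatter takes output_shape as [H, W] = [grid_y, grid_x].
output_shape = [grid_y, grid_x]
print(output_shape)  # [496, 432]
```

Note that with voxel_size = [0.05, 0.05, 4] the stock range gives a non-integer count along x (69.12 / 0.05 = 1382.4), so point_cloud_range itself may also need adjusting so the extents divide evenly.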
@IdrissGo, did you change output_shape and train a model? I have modified point_cloud_range, voxel_size, and out_size, but I was getting the error below.
Is there anything else that I need to modify?
FYI, modifying point_cloud_range, voxel_size, output_shape in pts_middle_encoder, and grid_shape in train_cfg solved the issue, and I am able to train the model.
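A hedged sketch of the entries that must agree after changing voxel_size, tying together the fields listed in the comment above. Field names follow that comment and the mmdetection3d PointPillars KITTI config, but the exact nesting and the grid_shape ordering may differ in your checkout; derive_grid is a hypothetical helper, not part of mmdetection3d.

```python
# Hypothetical helper: voxel counts along x, y, z for a range and voxel size.
def derive_grid(pc_range, vsize):
    return [round((pc_range[i + 3] - pc_range[i]) / vsize[i]) for i in range(3)]

point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1]
voxel_size = [0.16, 0.16, 4]
nx, ny, nz = derive_grid(point_cloud_range, voxel_size)

# The places that must stay consistent (names per the comment above):
voxel_layer = dict(voxel_size=voxel_size, point_cloud_range=point_cloud_range)
pts_middle_encoder = dict(type='PointPillarsScatter', output_shape=[ny, nx])
train_cfg = dict(grid_shape=[nx, ny, nz])  # ordering may vary by version
```

The key point is that none of these values are independent: all of them are functions of point_cloud_range and voxel_size, so changing one without recomputing the others leads to out-of-bounds scatter indices.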
Prerequisite
Task
I have modified the scripts/configs, or I'm working on my own tasks/models/datasets.
Branch
main branch https://github.com/open-mmlab/mmdetection3d
Environment
sys.platform: linux
Python: 3.8.16 | packaged by conda-forge | (default, Feb 1 2023, 16:01:55) [GCC 11.3.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0: NVIDIA A100-SXM4-40GB
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.7, V11.7.99
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PyTorch: 2.0.1
PyTorch compiling details: PyTorch built with:
TorchVision: 0.15.2
OpenCV: 4.7.0
MMEngine: 0.7.4
MMDetection: 3.0.0
MMDetection3D: 1.1.0+fa724b1
spconv2.0: True
Reproduces the problem - code sample
In the base model config file for PointPillars, I just modified the voxel_size entry (first line):
voxel_size = [0.05, 0.05, 4]
Reproduces the problem - command or script
python tools/train.py configs/pointpillars/pointpillars_hv_secfpn_8xb6-160e_kitti-3d-3class.py
Reproduces the problem - error message
A bunch of these (one per CUDA thread):

/opt/conda/conda-bld/pytorch_1682343962757/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [446,0,0], thread: [113,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.

(the same assertion repeats for threads [114,0,0] through [121,0,0] and many more)

And:

Traceback (most recent call last): File "tools/train.py", line 135, in <module>
main()
File "tools/train.py", line 131, in main
runner.train()
File "/nfs/home/gouidr/anaconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1721, in train
model = self.train_loop.run() # type: ignore
File "/nfs/home/gouidr/anaconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/runner/loops.py", line 96, in run
self.run_epoch()
File "/nfs/home/gouidr/anaconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/runner/loops.py", line 112, in run_epoch
self.run_iter(idx, data_batch)
File "/nfs/home/gouidr/anaconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/runner/loops.py", line 128, in run_iter
outputs = self.runner.model.train_step(
File "/nfs/home/gouidr/anaconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py", line 114, in train_step
losses = self._run_forward(data, mode='loss') # type: ignore
File "/nfs/home/gouidr/anaconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py", line 340, in _run_forward
results = self(data, mode=mode)
File "/nfs/home/gouidr/anaconda3/envs/openmmlab/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/nfs/home/gouidr/mmdetection3d/mmdet3d/models/detectors/base.py", line 75, in forward
return self.loss(inputs, data_samples, **kwargs)
File "/nfs/home/gouidr/mmdetection3d/mmdet3d/models/detectors/single_stage.py", line 74, in loss
x = self.extract_feat(batch_inputs_dict)
File "/nfs/home/gouidr/mmdetection3d/mmdet3d/models/detectors/voxelnet.py", line 43, in extract_feat
x = self.middle_encoder(voxel_features, voxel_dict['coors'],
File "/nfs/home/gouidr/anaconda3/envs/openmmlab/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/nfs/home/gouidr/mmdetection3d/mmdet3d/models/middle_encoders/pillar_scatter.py", line 36, in forward
return self.forward_batch(voxel_features, coors, batch_size)
File "/nfs/home/gouidr/mmdetection3d/mmdet3d/models/middle_encoders/pillar_scatter.py", line 86, in forward_batch
this_coors = coors[batch_mask, :]
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
Additional information
I wanted to experiment with the voxel size values for PointPillars, but I get these CUDA errors when training the model. Any insight on this?