tianweiy / CenterPoint


spconv/src/spconv/indice.cu 125 #2

Closed DeepakVellampalli closed 4 years ago

DeepakVellampalli commented 4 years ago

Hi,

I was trying to train with the config file "nusc_centerpoint_voxelnet_01voxel.py" on 1 GPU with sweep=1, and I encountered a crash during training. Kindly help.

File "/home/Nuscene_Top/CenterPoint/tools/train.py", line 128, in main logger=logger, File "/home/Nuscene_Top/CenterPoint/det3d/torchie/apis/train.py", line 381, in train_detector trainer.run(data_loaders, cfg.workflow, cfg.total_epochs, local_rank=cfg.local_rank) File "/home/Nuscene_Top/CenterPoint/det3d/torchie/trainer/trainer.py", line 538, in run epoch_runner(data_loaders[i], self.epoch, kwargs) File "/home/Nuscene_Top/CenterPoint/det3d/torchie/trainer/trainer.py", line 405, in train self.model, data_batch, train_mode=True, kwargs File "/home/Nuscene_Top/CenterPoint/det3d/torchie/trainer/trainer.py", line 363, in batch_processor_inline losses = model(example, return_loss=True) File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call result = self.forward(*input, kwargs) File "/home/Nuscene_Top/CenterPoint/det3d/models/detectors/voxelnet.py", line 47, in forward x = self.extract_feat(data) File "/home/Nuscene_Top/CenterPoint/det3d/models/detectors/voxelnet.py", line 24, in extract_feat input_features, data["coors"], data["batch_size"], data["input_shape"] File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call result = self.forward(*input, *kwargs) File "/home/Nuscene_Top/CenterPoint/det3d/models/backbones/scn.py", line 364, in forward ret = self.middle_conv(ret) File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call result = self.forward(input, kwargs) File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/spconv/modules.py", line 123, in forward input = module(input) File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call result = self.forward(*input, **kwargs) File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/spconv/conv.py", line 155, in forward self.stride, self.padding, self.dilation, self.output_padding, self.subm, self.transposed, grid=input.grid) File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/spconv/ops.py", line 89, in get_indice_pairs stride, padding, dilation, out_padding, int(subm), int(transpose)) RuntimeError: /home/Nuscene_Top/spconv/src/spconv/indice.cu 125 cuda execution failed with error 2

tianweiy commented 4 years ago

Can you tell me your torch, cuda, and spconv versions? Also, what other changes (if any) did you make to the code? Unfortunately, I can't reproduce this error. (I guess it happens a few hours into the training?)

tianweiy commented 4 years ago

cuda execution failed with error 2

Uhm, I am not 100 percent sure, but the cuda error 2 seems to mean that you are out of memory.
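If it helps, here is a quick torch-only check (plain PyTorch utilities, nothing CenterPoint-specific; device index 0 is just an assumption) you can run near the crash point to see how close you are to the card's capacity:

```python
import torch

# CUDA runtime error 2 is cudaErrorMemoryAllocation ("out of memory").
# These counters only track memory that went through PyTorch's caching
# allocator, so read them as a lower bound on actual usage.
total = torch.cuda.get_device_properties(0).total_memory
print("total     : %.2f GiB" % (total / 1024 ** 3))
print("allocated : %.2f GiB" % (torch.cuda.memory_allocated(0) / 1024 ** 3))
print("peak      : %.2f GiB" % (torch.cuda.max_memory_allocated(0) / 1024 ** 3))
```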

DeepakVellampalli commented 4 years ago

Sorry for the late reply. I was using torch version 1.1 and spconv version 1.0. I replicated the same setup you mentioned in the installation instructions. I tried your PointPillars model successfully without any hurdles, but this config uses the spconv module and the spconv module is crashing. Moreover, I am training with sweeps=1, hence I commented out lines 87-97 in https://github.com/tianweiy/CenterPoint/blob/master/det3d/datasets/pipelines/loading.py

Apart from this, there are no other changes to the code. Kindly help.

tianweiy commented 4 years ago

cuda execution failed with error 2

Uhm, I am not 100 percent sure, but the cuda error 2 seems to mean that you are out of memory.

You don't need to comment out the loading code. Just change the nsweep field in the config to 1. Also, I suspect it is a GPU out-of-memory issue from the error log; can you check this?
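For reference, the change is meant to live in the config rather than in the loader; a minimal sketch of the idea (the exact field names and values are assumptions in the det3d config style, so check them against nusc_centerpoint_voxelnet_01voxel.py rather than copying verbatim):

```python
# Sketch of the intended config edit, not the verbatim file.
nsweeps = 1  # down from the multi-sweep default: load only the key frame

data = dict(
    samples_per_gpu=4,   # per-GPU batch size; lowering this also reduces memory
    workers_per_gpu=8,
    train=dict(
        nsweeps=nsweeps,  # the nuScenes dataset picks up its sweep count here
        # ... remaining train dataset settings unchanged
    ),
    # val/test entries should use the same nsweeps value
)
```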

AbdeslemSmahi commented 4 years ago

cuda execution failed with error 2

Uhm, I am not 100 percent sure, but the cuda error 2 seems to mean that you are out of memory.

You don't need to comment out the loading code. Just change the nsweep field in the config to 1. Also, I suspect it is a GPU out-of-memory issue from the error log; can you check this?

How can I reduce memory usage in the test phase?

tianweiy commented 4 years ago

@AbdeslemSmahi the simplest way is to add the --speed_test flag during testing. This will use batch size 1 by default. Not sure how to go beyond this.
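For anyone trying to squeeze test-time memory further, the generic PyTorch pattern behind that flag is just a batch-size-1 loader run under no_grad(); a toy sketch (the Linear model is a stand-in for the detector, none of these names are CenterPoint API):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for the detector; the point is the memory pattern, not the model.
model = nn.Linear(16, 4).cuda()
loader = DataLoader(TensorDataset(torch.randn(8, 16)), batch_size=1)  # what --speed_test effectively does

model.eval()
with torch.no_grad():                 # no autograd buffers are kept at test time
    for (x,) in loader:
        out = model(x.cuda())
        torch.cuda.empty_cache()      # optionally return cached blocks between samples
```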

AbdeslemSmahi commented 4 years ago

@AbdeslemSmahi the simplest way is to add the --speed_test flag during testing. This will use batch size 1 by default. Not sure how to go beyond this.

Even that didn't work.

tianweiy commented 4 years ago

You probably need to get a larger GPU then... or try the PointPillars model, which takes less memory.

tianweiy commented 4 years ago

Closing for now. Feel free to reopen if you still have questions.

ZiyuXiong commented 3 years ago

@AbdeslemSmahi @tianweiy Hi, I also encountered the same issue with the config nusc_centerpoint_voxelnet_dcn_0075voxel_flip_circle_nms.py, but it works fine with the config nusc_centerpoint_pp_dcn_02voxel_circle_nms.py. Have you solved this problem? All training was done on 2 Titan V GPUs (4 Titan Vs were also tested and failed as well), and I noticed that the first GPU seems to use more GPU memory than the second one. Is there any chance that the distributed launch assigns the data loading only to the first GPU?

tianweiy commented 3 years ago

Hi, the 0075 voxelnet will definitely take much more memory than pp. Can you train the 0.1 voxel size model? You can also decrease the batch size a bit; I don't think this matters much for performance.
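A rough back-of-the-envelope on why the voxel size matters so much for memory (the point cloud ranges below are the usual nuScenes settings for these configs and should be treated as assumptions):

```python
# BEV cells per axis = range / voxel_size, so memory grows roughly with 1 / voxel_size**2.
grid_0075 = (54.0 - (-54.0)) / 0.075   # ~1440 cells per BEV axis for the 0.075 config
grid_01   = (51.2 - (-51.2)) / 0.1     # ~1024 cells per BEV axis for the 0.1 config
print(grid_0075, grid_01, (grid_0075 / grid_01) ** 2)  # roughly 2x more BEV cells at 0.075
```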

For the distributed data parallel stuff, does your model work with a single GPU?

Also, it seems spconv (voxelnet) is quite weird on the Titan V. Basically, I tried to train voxelnet on Titan Xp, Titan RTX, 2070/2080, V100, and Titan V. All the other GPUs work, but on the Titan V I can't even use batch size 2 for a KITTI model. I feel this is a bug with spconv. Do let me know if your Titan V works well with spconv.

ZiyuXiong commented 3 years ago

@tianweiy Thank you for your reply. I followed your advice and the results are:

  1. voxel_size=0.1, batch_size=4, Titan V, nproc_per_node=2, failed (cuda execution failed with error 2)
  2. voxel_size=0.1, batch_size=4, Titan V, single Titan V, failed (cuda execution failed with error 2)
  3. voxel_size=0.075(nusc_centerpoint_voxelnet_dcn_0075voxel_flip_circle_nms.py), batch_size=4, Titan xp, nproc_per_node=2, failed (GPU out of memory)
  4. voxel_size=0.075(nusc_centerpoint_voxelnet_dcn_0075voxel_flip_circle_nms.py), batch_size=4, Titan xp, nproc_per_node=2, succeed image

It seems that spconv cannot work on the Titan V (when voxelnet is involved), and it indeed takes a large amount of memory to run the config with the small voxel size. But now I have reduced the batch size to 2 and it worked, so nothing weird is happening for the moment. Thank you again for your timely and detailed reply!

tianweiy commented 3 years ago

Sure, good luck with your project.