Closed amiltonwong closed 1 year ago
Hi, @amiltonwong ,
Thanks for your interest in our work. Regarding the first question: I think 12 GB of memory is sufficient for debugging with batch size 1. You can try the `PTv2m2` config or set `enable_amp=True` and check whether the problem persists. Here is my command for debugging; there is no need to edit the batch size in the config file:

```
python tools/train.py --config-file configs/scannet/pretrain-msc-v1m1-0f-spunet34c-fine-tune.py --num-gpus 1 --options save_path=exp/scannet/debug batch_size=1
```
Regarding the second question: the number of points is controlled by `Voxelization` (grid sampling) and `SphereCrop`. We usually adopt a grid size of 0.04 m or 0.05 m for S3DIS, and crop the point cloud if the number of points exceeds 100,000. You can edit `voxel_size` in `Voxelization` and `point_max` in `SphereCrop`.
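As a rough sketch of where those two knobs live, here is a hypothetical excerpt of a Pointcept-style data-transform list; the exact transform names and keys may differ in your config file, so treat this as an illustration, not the actual config:

```python
# Hypothetical sketch of the two transforms that control point count.
# Keys and transform names are assumptions; check your own config file.
data_transforms = [
    # Voxelization / grid sampling: a larger grid size merges more points
    # into one voxel, so fewer points survive.
    dict(type="GridSample", grid_size=0.04, mode="train"),
    # SphereCrop: hard cap on how many points are fed to the model.
    dict(type="SphereCrop", point_max=100000, mode="random"),
]

# To reduce memory, raise grid_size (e.g. 0.04 -> 0.05) or lower point_max.
print(data_transforms[0]["grid_size"], data_transforms[1]["point_max"])
```

Both changes trade spatial resolution for memory, so re-check validation mIoU after editing them.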
I rechecked the config. The released config for `ptv2m1` sets `enable_amp=False` to reproduce our original setting, but `enable_amp=True` is really helpful for saving memory. You can try it.
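If you prefer not to edit the config file, you can override the flag from the command line via `--options`, the same mechanism used above for `batch_size` (this assumes `enable_amp` is a top-level key in your config, as it is in the released ones):

```shell
# Override enable_amp without touching the config file (sketch).
python tools/train.py \
  --config-file ./configs/s3dis/semseg-ptv2m1-0-base.py \
  --num-gpus 1 \
  --options save_path=exp/s3dis/debug batch_size=1 enable_amp=True
```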
@Gofinge, thanks a lot for your reply. The `enable_amp=True` option is really useful; it saves around 20% of GPU memory usage.
Hi @Gofinge,
Thanks for releasing the package. I ran the following command:

```
python tools/train.py --config-file ./configs/s3dis/semseg-ptv2m1-0-base.py --num-gpus 1 --options save_path=exp/s3dis/debug
```

and got a CUDA out-of-memory error, even though I had already set the batch size to 1. What is the minimum GPU memory requirement for running `semseg-ptv2m1-0-base.py`? Also, where can I set the number of input points?
Thanks~