Closed lzhnb closed 3 years ago
I train the model with a V100 (32 GB); with a single card it takes about 4 days to finish. The bottleneck is data processing (both the data and its labels). I also train on nuScenes, where it is much faster (nuScenes has about 40,000 points per scan vs. about 140,000 for SemanticKITTI).
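As a side note, a minimal sketch for checking whether the data pipeline really dominates each iteration, assuming a standard PyTorch `DataLoader`-based training loop (the `RandomPointCloud` dataset and the tiny per-point model below are hypothetical stand-ins, not this repo's code):

```python
import time
import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset

# Hypothetical stand-in for a point-cloud dataset; swap in the repo's
# SemanticKITTI / nuScenes dataset to profile the real pipeline.
class RandomPointCloud(Dataset):
    def __init__(self, num_scans=200, num_points=140000):
        self.num_scans, self.num_points = num_scans, num_points

    def __len__(self):
        return self.num_scans

    def __getitem__(self, idx):
        pts = torch.randn(self.num_points, 4)              # xyz + intensity
        labels = torch.randint(0, 20, (self.num_points,))  # per-point labels
        return pts, labels

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 20)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loader = DataLoader(RandomPointCloud(), batch_size=1, num_workers=4, pin_memory=True)

data_time = compute_time = 0.0
end = time.time()
for step, (pts, labels) in enumerate(loader, 1):
    data_time += time.time() - end                          # time spent waiting on data loading/processing

    start = time.time()
    pts, labels = pts.to(device), labels.to(device)
    logits = model(pts.squeeze(0))                          # per-point logits
    loss = nn.functional.cross_entropy(logits, labels.squeeze(0))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if device == "cuda":
        torch.cuda.synchronize()                            # ensure GPU work is finished before timing
    compute_time += time.time() - start

    end = time.time()
    if step == 50:
        break

print(f"avg data time {data_time / step:.3f}s | avg compute time {compute_time / step:.3f}s")
```

If the average data time is comparable to or larger than the compute time, increasing `num_workers` or caching preprocessed labels is usually the first thing to try.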
Thanks
Thanks for your amazing work. I'm concerned about how long training takes.
My GPU is a single RTX 3090. I found that peak memory usage is around 16 GB and each iteration takes around 1.5~2 s. With close to 10,000 iterations per epoch and 40 training epochs, this seems very time-consuming.
Would you mind sharing your hardware setup and the training time details (e.g., per-iteration time and the total time of the full training pipeline, including evaluation)?