ouenal / scribblekitti

Scribble-Supervised LiDAR Semantic Segmentation, CVPR 2022 (ORAL)
https://ouenal.github.io/scribblekitti/

Training speed #4

Closed jasonwjw closed 2 years ago

jasonwjw commented 2 years ago

Thanks for your amazing work. I'm curious about the training time.

My GPU is a single Tesla V100 32G. Under your training settings, each iteration takes around 1.5-2 s in STEP 1, and each epoch takes close to 15-16 h. Repeating this for 75 epochs of training seems very time-consuming.

Would you mind sharing your device setup and the details of the training time (such as iteration time and the duration of the whole training pipeline)?

ouenal commented 2 years ago

I don't remember exactly, but I either used 8 2080Ti's, or 4 2080Ti's with doubled batch accumulation. The crucial thing is that I only trained for 26 epochs. On our cluster we have time limits on GPU queues (2 days for the basic queue), so I usually set the epoch count very large to keep training until the time limit is reached. You can train for much fewer than 75 epochs; I doubt you will see any performance increase beyond 26-28 epochs, depending on your random initialization.
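
The trade-off mentioned above (halving the GPU count while doubling gradient accumulation keeps the effective batch size constant) can be sketched as follows; the numbers are illustrative, not the actual settings used for the paper:

```python
def effective_batch(gpus: int, batch_per_gpu: int, accum_steps: int) -> int:
    """Gradients from `accum_steps` micro-batches on each of `gpus` devices
    are accumulated before a single optimizer step, so the effective batch
    size is the product of all three factors."""
    return gpus * batch_per_gpu * accum_steps

# Halving GPUs while doubling accumulation preserves the effective batch size
# (per-GPU batch size of 2 is a placeholder, not the repository's config).
assert effective_batch(8, 2, 1) == effective_batch(4, 2, 2)  # both give 16
```

Note that accumulation trades memory/parallelism for wall-clock time: the optimizer sees the same effective batch, but each step now spans more sequential forward/backward passes per GPU.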