VCIP-RGBD / DFormer

[ICLR 2024] DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation
https://yinbow.github.io/Projects/DFormer/index.html
MIT License

ImageNet-1k pretrain training time #18

Closed EunNamCho closed 7 months ago

EunNamCho commented 7 months ago

I read your paper with great interest. In particular, I found the construction of an RGB-D pretrained model very interesting.

I see that 8 NVIDIA 3090 GPUs were used when pretraining DFormer on ImageNet-1K. I would also like to run additional experiments on ImageNet-1K. Could you share the training time required for each model size?

yinbow commented 7 months ago

Thanks for your attention to our work!

The pretraining times for the DFormer variants on our 8×3090 GPUs are:

| Model | DFormer-T | DFormer-S | DFormer-B | DFormer-L |
| --- | --- | --- | --- | --- |
| Training Time | ~40h | ~49h | ~72h | ~85h |

The training time also depends on the CPUs: more CPU cores speed up data loading and therefore shorten training.

EunNamCho commented 7 months ago

Thank you for your quick response, and thank you for writing a paper on such a good topic. It gives me a lot of inspiration!!