Thanks for your attention. We used 4 × A40 (48 GB) GPUs to train on the Cityscapes dataset with a ResNet101 backbone and a batch size of 16. Under this setting, memory usage was close to the GPUs' maximum capacity, so there is a potential risk of running out of memory if the system environment, GPU drivers, or software versions differ. However, this issue should not arise with a ResNet50 backbone. Could you share your detailed config file so that we can better analyze the situation?
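For reference, the memory-relevant parts of the config in this setting look roughly like the sketch below. This assumes MMSegmentation-style config conventions; the exact keys in this repository may differ, so treat the names as illustrative:

```python
# Sketch of the memory-relevant config fields (assuming MMSegmentation-style
# conventions; exact keys in this repo may differ).
crop_size = (801, 801)  # large crops dominate activation memory

model = dict(
    backbone=dict(
        type='ResNetV1c',
        depth=101,  # depth=50 uses noticeably less memory
    ),
)

data = dict(
    samples_per_gpu=4,   # 4 samples/GPU x 4 GPUs = total batch size of 16
    workers_per_gpu=4,
)

# If OOM persists, mixed-precision training cuts activation memory:
# fp16 = dict(loss_scale=512.0)
```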
Thank you very much for your reply. I have already resolved this issue.
Hello, I noticed that your paper used 4 A40 GPUs to train on the Cityscapes dataset, with a batch size of 16, an image size of 801 × 801, and ResNet101 as the backbone network. I used 4 A6000 GPUs (similar memory capacity to the A40), a batch size of 8, an image size of 801 × 801, and ResNet50 as the backbone network, but training ran out of memory.