lhoyer / DAFormer

[CVPR22] Official Implementation of DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation

Losses are backpropagated separately #48

Closed. HKQX closed this issue 1 year ago.

HKQX commented 2 years ago

Dear authors, thank you for your outstanding work. While reading the code, I noticed that the losses are backpropagated separately. In many other works, the losses are accumulated first and then backpropagated once. What is the difference between the two approaches? Looking forward to your reply.

lhoyer commented 2 years ago

Dear @HKQX,

Thank you for your interest in our work. Doing separate backward passes for different inputs (source images and mixed images) allows freeing the first compute graph after the first backward pass, which reduces the overall GPU memory consumption compared to a single backward pass over the accumulated loss. This is similar in concept to gradient accumulation.
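For illustration, here is a minimal sketch of the two variants in generic PyTorch (not the actual DAFormer training loop; `src_batch`, `mix_batch`, and `loss_fn` are hypothetical placeholders):

```python
import torch

def step_separate(model, optimizer, src_batch, mix_batch, loss_fn):
    """Backward each loss separately: the graph of the first forward pass
    is freed before the second forward pass builds its own graph."""
    optimizer.zero_grad()

    src_loss = loss_fn(model(src_batch['img']), src_batch['gt'])
    src_loss.backward()  # source compute graph is released here

    mix_loss = loss_fn(model(mix_batch['img']), mix_batch['gt'])
    mix_loss.backward()  # gradients accumulate into the same .grad buffers

    optimizer.step()

def step_accumulated(model, optimizer, src_batch, mix_batch, loss_fn):
    """Sum the losses first: both compute graphs stay alive until the
    single backward call, so peak GPU memory is higher."""
    optimizer.zero_grad()
    src_loss = loss_fn(model(src_batch['img']), src_batch['gt'])
    mix_loss = loss_fn(model(mix_batch['img']), mix_batch['gt'])
    (src_loss + mix_loss).backward()
    optimizer.step()
```

Both variants produce the same accumulated gradients (up to floating-point ordering); only the peak memory during the step differs.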

Best, Lukas

HKQX commented 1 year ago

Thank you for your reply. When I run the code, I find that src.loss_imnet_feat_dist becomes NaN. The reason is that, during the calculation of the Thing-Class ImageNet Feature Distance (FD), the image does not contain any of the thing classes [6, 7, 11, 12, 13, 14, 15, 16, 17, 18]. Have you noticed this problem? Also, the memory usage is close to 12 GB, so a 2080 Ti should not be able to run the code. Do I need to adjust anything? Looking forward to your reply.

lhoyer commented 1 year ago

Please have a look at issue #11 regarding NaN in the FD loss. I am able to run this repository on an RTX 2080 Ti. It's tight, but it fits.
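For context, the NaN arises when the FD loss is averaged over zero thing-class pixels. A minimal sketch of guarding against this case (hypothetical helper; `safe_feat_dist`, `feat`, `feat_imnet`, and `gt_downscaled` are illustrative names, not the repository's actual implementation):

```python
import torch

# Thing-class indices mentioned above
THING_CLASSES = [6, 7, 11, 12, 13, 14, 15, 16, 17, 18]

def safe_feat_dist(feat, feat_imnet, gt_downscaled):
    """Mean squared feature distance over thing-class pixels only.
    Returns a zero loss instead of NaN when no thing-class pixel exists.

    feat, feat_imnet: (N, C, H, W) feature maps
    gt_downscaled:    (N, H, W) label map at feature resolution
    """
    # Boolean mask of pixels belonging to any thing class
    mask = torch.zeros_like(gt_downscaled, dtype=torch.bool)
    for c in THING_CLASSES:
        mask |= gt_downscaled == c

    if mask.sum() == 0:
        # Averaging over zero valid pixels would produce NaN; skip instead.
        return feat.new_zeros(())

    dist = torch.pow(feat - feat_imnet, 2).sum(dim=1)  # per-pixel squared L2 distance
    return dist[mask].mean()
```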