Dear authors, thank you for your outstanding work. While reading the code, I noticed that the losses are backpropagated separately, whereas in many other works the losses are accumulated and backpropagated in a single pass. What is the difference between the two approaches? Looking forward to your reply.
Dear @HKQX,
Thank you for your interest in our work. Doing separate backward passes for the different inputs (source images and mixed images) allows freeing the first compute graph after the first backward pass, which reduces the overall GPU memory consumption compared to a single backward pass on the accumulated loss (a minimal sketch of both variants is shown below). This is a similar concept to gradient accumulation:
- https://pytorch-lightning.readthedocs.io/en/stable/advanced/training_tricks.html#accumulate-gradients
- https://kozodoi.me/python/deep%20learning/pytorch/tutorial/2021/02/19/gradient-accumulation.html#:~:text=Gradient%20accumulation%20modifies%20the%20last,been%20processed%20by%20the%20model.
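Here is a minimal PyTorch sketch of the two variants, using a stand-in model and random data rather than the repository's actual training code. Both variants accumulate the same gradients, but variant A keeps only one compute graph alive at a time:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)          # stand-in for the segmentation network
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

src_x, src_y = torch.randn(4, 10), torch.randn(4, 1)   # source batch
mix_x, mix_y = torch.randn(4, 10), torch.randn(4, 1)   # mixed batch

# Variant A: separate backward passes (gradients accumulate in .grad)
optimizer.zero_grad()
loss_src = criterion(model(src_x), src_y)
loss_src.backward()               # source-batch graph is freed here
loss_mix = criterion(model(mix_x), mix_y)
loss_mix.backward()               # only the mixed-batch graph is alive at this point
optimizer.step()

# Variant B: one backward pass on the accumulated loss
optimizer.zero_grad()
loss = criterion(model(src_x), src_y) + criterion(model(mix_x), mix_y)
loss.backward()                   # both graphs must be kept in memory until here
optimizer.step()
```

Gradient accumulation over micro-batches follows the same pattern: several backward calls, then a single optimizer step.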
Best, Lukas
Thank you for your reply. When I run the code, I find that src.loss_imnet_feat_dist is NaN. The NaN arises during the calculation of the Thing-Class ImageNet Feature Distance (FD) when an image does not contain any of the thing classes [6, 7, 11, 12, 13, 14, 15, 16, 17, 18]. Have you noticed this problem? Also, the memory usage is close to 12 GB, so a 2080 Ti should not be able to run the code. Do I need to adjust anything? Looking forward to your reply.
Please have a look at issue #11 regarding NaN in the FD loss. I am able to run this repository on an RTX 2080 Ti. It's tight, but it fits.
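For readers hitting the same NaN, here is a minimal sketch of the failure mode described above, using hypothetical tensors rather than the repository's actual FD implementation, together with a common guard:

```python
import torch

# If an image contains no pixels of the thing classes, the boolean mask
# selects zero elements and the masked mean becomes NaN.
thing_classes = torch.tensor([6, 7, 11, 12, 13, 14, 15, 16, 17, 18])

feat_dist = torch.rand(128, 128)                   # hypothetical per-pixel feature distance
labels = torch.zeros(128, 128, dtype=torch.long)   # image containing only stuff classes

mask = torch.isin(labels, thing_classes)           # all False -> empty selection
loss = feat_dist[mask].mean()                      # mean over 0 elements -> tensor(nan)

# A common guard: only average when thing-class pixels exist, otherwise use a zero loss.
loss = feat_dist[mask].mean() if mask.any() else feat_dist.sum() * 0.0
```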