When we use FPN, we get the following error on torch==1.13.1.
It looks like this is caused by an in-place update of `laterals` in `FPN.forward()` in fpn.py.
```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 64, 20, 20]], which is output 0 of LeakyReluBackward1, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
```
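A minimal standalone sketch of this failure mode (not the actual `FPN.forward()` code from fpn.py; the tensor shape is taken from the error message). An inplace activation saves its output for backward, so a later in-place add to that same tensor bumps its version counter and backward fails:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 64, 20, 20, requires_grad=True)

# An inplace LeakyReLU saves its *output* for backward (LeakyReluBackward1).
lateral = F.leaky_relu(x * 2, inplace=True)

# In-place update, analogous to something like `laterals[i - 1] += upsampled`
# in FPN.forward(): this modifies the tensor autograd already saved.
lateral += 1.0

try:
    lateral.sum().backward()
    raised = False
except RuntimeError:
    # "one of the variables needed for gradient computation has been
    # modified by an inplace operation ..."
    raised = True

# Fix: make the add out-of-place, so the saved activation is untouched.
lateral = F.leaky_relu(x * 2, inplace=True)
lateral = lateral + 1.0  # allocates a new tensor instead of mutating
lateral.sum().backward()  # succeeds
```

Rewriting the accumulation as `laterals[i - 1] = laterals[i - 1] + upsampled` (or avoiding inplace activations feeding into it) sidesteps the version-counter check at the cost of one extra allocation.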
Thank you for the great OSS!
Here is the diff of the YAML file.