Open · Sebastian-X opened this issue 4 years ago
Hello, thanks for your code. Referring to your code, I reimplemented CIoU loss in my Faster R-CNN framework, but got NaN gradients in the first iteration of training. Tracking the NaN values down, I found that the problem occurs when the RPN module predicts inappropriate RoIs, e.g. [200, 599, 300, 599], which has identical y1 and y2 values. In this case, the gradients of the corresponding predicted boxes become NaN. Have you encountered this kind of problem before?

I understand that you use a mask to avoid computing IoUs for these inappropriate RoIs while computing CIoU, but this can't prevent PyTorch's autograd from backpropagating through these RoIs.

In fact, it seems that this problem also occurs when the IoU of a certain bounding box is 0.
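For reference, here is a minimal sketch of what I believe is happening (the box values and the simplified aspect-ratio term are illustrative, not your actual code): for a degenerate RoI the forward pass stays finite, but the backward pass produces NaN, and multiplying the loss by a mask afterwards does not stop it.

```python
import math
import torch

# Hypothetical minimal repro: a degenerate RoI with y1 == y2 (zero height),
# mimicking the [200, 599, 300, 599] box from my report.
boxes = torch.tensor([[200., 599., 300., 599.]], requires_grad=True)
w = boxes[:, 2] - boxes[:, 0]   # width  = 100
h = boxes[:, 3] - boxes[:, 1]   # height = 0

# Aspect-ratio term of CIoU: v = (4 / pi^2) * atan(w / h)^2 (gt term omitted).
# The forward value is finite: w / h = inf and atan(inf) = pi / 2.
v = (4 / math.pi ** 2) * torch.atan(w / h) ** 2

# Masking the loss afterwards does not help: the mask only zeroes the forward
# value, while autograd still differentiates through w / h, where the division
# backward produces 0 / 0 = nan and 0 * inf = nan for the masked element.
mask = (h > 0).float()          # 0.0 for the degenerate box
loss = (v * mask).sum()
loss.backward()
print(loss.item())              # 0.0 -- the forward pass looks fine
print(boxes.grad)               # tensor([[nan, nan, nan, nan]]) -- NaN leaks through
```

In my experiments, computing the ratio term only on the valid elements via boolean indexing (so the degenerate RoIs never enter the division at all), or clamping the height with a small epsilon such as `h.clamp(min=1e-6)`, seems to make the NaN gradients disappear.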