Zzh-tju / DIoU-SSD-pytorch

Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression (AAAI 2020)
GNU General Public License v3.0

some doubts about the forward loss compute #7

Closed cs-heibao closed 4 years ago

cs-heibao commented 4 years ago

Does the CIoU loss computation here also require the positive index? In my understanding, if we still only consider positive samples when computing the CIoU loss, why does the paper say this kind of loss improves the case of anchors with no overlap with the gt, compared with the IoU loss and GIoU loss? Thanks.

```python
if self.loss == 'SmoothL1':
    loss_l = F.smooth_l1_loss(loc_p, loc_t, reduction='sum')
else:
    giou_priors = priors.data.unsqueeze(0).expand_as(loc_data)
    loss_l = self.gious(loc_p, loc_t, giou_priors[pos_idx].view(-1, 4))
```
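For context, the priors are passed to `self.gious` because IoU-based losses operate on decoded boxes, whereas SmoothL1 compares the encoded offsets directly. Below is a minimal sketch of the standard SSD decode step, assuming the common (cx, cy, w, h) prior format and ssd.pytorch-style variances; it is illustrative, not necessarily this repo's exact code:

```python
import torch

def decode(loc, priors, variances=(0.1, 0.2)):
    # loc: (N, 4) predicted offsets; priors: (N, 4) in (cx, cy, w, h) format
    centers = priors[:, :2] + loc[:, :2] * variances[0] * priors[:, 2:]
    sizes = priors[:, 2:] * torch.exp(loc[:, 2:] * variances[1])
    # convert (cx, cy, w, h) to (x1, y1, x2, y2) corners for IoU computation
    return torch.cat([centers - sizes / 2, centers + sizes / 2], dim=1)
```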

Zzh-tju commented 4 years ago

As we know, convolution is a sequence of local operations, so its receptive field is limited. During bbox regression, the network cannot learn to regress boxes that are too far away, and forcing the model to learn such distant targets will harm training. So in practice it is necessary to select positive samples, and this is also the reason there is limited room for improvement for all of these bbox regression losses.
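To illustrate positive-sample selection, here is a minimal sketch of SSD-style anchor matching (an assumed illustration using `torchvision.ops.box_iou`, not this repo's matching code): an anchor counts as positive if its best IoU with any ground-truth box clears a threshold, and only those anchors receive a box regression loss.

```python
import torch
from torchvision.ops import box_iou

def positive_mask(anchors, gt_boxes, iou_thresh=0.5):
    # anchors: (A, 4), gt_boxes: (G, 4), both in (x1, y1, x2, y2) format
    ious = box_iou(anchors, gt_boxes)  # (A, G) pairwise IoU matrix
    best_iou, _ = ious.max(dim=1)      # best overlap per anchor
    return best_iou >= iou_thresh      # (A,) boolean mask of positive anchors
```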

cs-heibao commented 4 years ago

@Zzh-tju So the IoU loss, GIoU loss, and DIoU (or CIoU) loss all work only on anchors that overlap with the gt, just in different ways. And the DIoU/CIoU paper additionally considers three geometric factors (overlap area, central point distance, and aspect ratio), thereby accelerating bbox regression and giving better results as well?

Zzh-tju commented 4 years ago

Yes, that is the CIoU loss.
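For reference, the paper defines the CIoU loss as L_CIoU = 1 - IoU + ρ²(b, b^gt)/c² + αv, where ρ is the distance between box centers, c is the diagonal of the smallest enclosing box, and v measures aspect-ratio consistency. A minimal PyTorch sketch of this formula (an illustrative reimplementation, not the repo's exact code):

```python
import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    # pred, target: (N, 4) boxes in (x1, y1, x2, y2) format
    # IoU term
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # DIoU penalty: squared center distance over squared enclosing-box diagonal
    rho2 = ((pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) ** 2 +
            (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) ** 2) / 4
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # CIoU aspect-ratio term v with trade-off weight alpha
    w_p = (pred[:, 2] - pred[:, 0]).clamp(min=eps)
    h_p = (pred[:, 3] - pred[:, 1]).clamp(min=eps)
    w_t = (target[:, 2] - target[:, 0]).clamp(min=eps)
    h_t = (target[:, 3] - target[:, 1]).clamp(min=eps)
    v = (4 / math.pi ** 2) * (torch.atan(w_t / h_t) - torch.atan(w_p / h_p)) ** 2
    with torch.no_grad():
        alpha = v / (1 - iou + v + eps)

    return 1 - iou + rho2 / c2 + alpha * v  # per-box loss, shape (N,)
```

Note that the distance and aspect-ratio penalties stay informative even when IoU is zero, which is what lets DIoU/CIoU provide gradients for non-overlapping boxes, while the positive-sample selection above still restricts which anchors the loss is applied to.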

feixiangdekaka commented 4 years ago

| Loss   | AP    |
|--------|-------|
| L_IoU  | 51.01 |
| L_GIoU | 51.06 |
| L_DIoU | 51.31 |
| L_CIoU | 51.44 |

The improvement for SSD is relatively small: 0.43 points from L_IoU to L_CIoU.