Open liuzonglei0713 opened 4 years ago
This is not correct. If there is no annotation, we still need to calculate the classification loss; the regression loss will be zero in this case, but NOT the classification loss. You're training the network on a hard negative sample, meaning you want it NOT to detect any object, so the classification target should be 0 for every anchor and every object class.
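To make the point concrete, here is a minimal sketch (my own illustration, not the repository's code) of the classification loss an all-background image produces. With a target of 0 everywhere, the binary cross-entropy term reduces to -log(1 - p), and the focal term p**gamma down-weights easy negatives:

"""
import torch

alpha, gamma = 0.25, 2.0

def negative_focal_loss(p):
    # Focal loss for anchors whose target is 0 (no object present).
    p = p.clamp(1e-4, 1.0 - 1e-4)
    return ((1.0 - alpha) * p.pow(gamma) * -torch.log(1.0 - p)).sum()

confident_fp = torch.full((5, 80), 0.9)   # network wrongly "detects" objects everywhere
quiet = torch.full((5, 80), 0.01)         # network correctly predicts background
print(negative_focal_loss(confident_fp))  # large loss -> scores get pushed toward 0
print(negative_focal_loss(quiet))         # near zero -> nothing left to learn
"""

A confident false positive is penalized heavily, while an anchor that already predicts background contributes almost nothing, which is exactly the hard-negative training described above.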
I see, I was wrong.
The following code has a bug. In `if bbox_annotation.shape[0] == 0:`, when `bbox_annotation.shape[0]` is zero the image has no labels, so we should not compute anything for it; we should just `continue`:

"""
if bbox_annotation.shape[0] == 0:
    if torch.cuda.is_available():
        alpha_factor = torch.ones(classification.shape).cuda() * alpha
"""
""" I think wo shoude write like the following: """ class FocalLoss(nn.Module): def forward(self, classifications, regressions, anchors, annotations): alpha = 0.25 gamma = 2.0 batch_size = classifications.shape[0] classification_losses = [] regression_losses = []
"""