Open yaojunr opened 5 years ago
I have the same question as you. I think `targets > targets[:, i].unsqueeze(1)`
should be `targets < targets[:, i].unsqueeze(1)`.
I also think the rank-loss code does not match the description in the paper: I think `loss_p` should be `1 + pivot - score`.
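To make the direction of the comparison concrete, here is a small NumPy sketch (the per-part loss values and shapes are hypothetical, not from the repository) showing the mask each comparison direction produces:

```python
import numpy as np

# Hypothetical per-part losses for a batch of 2 samples and 3 parts
# (a lower loss means the part was classified more confidently).
targets = np.array([[0.2, 0.5, 0.9],
                    [0.7, 0.1, 0.4]])

i = 0                                   # pivot column
pivot = targets[:, i][:, None]          # shape (2, 1), broadcasts over parts

mask_gt = (targets > pivot).astype(np.float32)  # parts with a LARGER loss than the pivot
mask_lt = (targets < pivot).astype(np.float32)  # parts with a SMALLER loss than the pivot
```

Printing the two masks side by side makes it easy to check which direction matches the ordering described in the paper.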
As the pivot column moves from left to right, suppose the union of `s1`, `pivot`, and `s2` equals the total set of anchor scores. The individual loss term is defined by `L = max(0, 1 - pivot + score)`; minimizing it amounts to making `1 - pivot + score` less than or equal to 0, which can be rewritten as `pivot - score >= 1`:

For `s1`: let `score` be `s1`. We have `score > pivot`, so `pivot - score < 0`. If the part loss `L_score > L_pivot`, then `target_p` is `True`, and the individual loss term contributes to the total loss in order to reverse the order and guarantee a margin of 1.

For `s2`: let `score` be `s2`. We have `score < pivot`, so `pivot - score > 0`. If the part loss `L_score > L_pivot`, then `target_p` is `True`, and the individual loss term contributes to the total loss in order to guarantee a margin of 1.
My question is: the part loss is used only to construct a mask and makes no contribution to the gradient, so why not use the class probabilities of the parts directly?
Hi, I have a question about computing the rank loss. The original code is:
See the code here
`targets_p = (targets > targets[:, i].unsqueeze(1)).type(torch.cuda.FloatTensor)`
I think the variable `targets_p` is a mask of `targets` marking where the other values are larger than the indexed value. However, I think `targets_p` is determined only by the relative values. The way `part_loss` is computed just reverses the order of the relative values and fetches the specific index. I think fetching the specific index alone is enough, so why compute the value of `part_loss`?
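The claim that `targets_p` is determined only by the relative values can be checked directly: applying any strictly increasing transform to the (hypothetical) loss values below preserves their ordering, and the resulting mask is identical.

```python
import numpy as np

# Hypothetical part losses; any strictly increasing transform
# preserves the ordering and therefore the comparison mask.
targets = np.array([[0.2, 0.5, 0.9]])
scaled = targets * 10.0 + 3.0

i = 1
mask_a = targets > targets[:, i][:, None]
mask_b = scaled > scaled[:, i][:, None]
```

Both masks come out the same, which is the sense in which the mask ignores magnitudes and sees only the ranking.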