bermanmaxim / LovaszSoftmax

Code for the Lovász-Softmax loss (CVPR 2018)
http://bmax.im/LovaszSoftmax
MIT License

Variable data has to be a tensor, but got Variable #4

Closed PkuRainBow closed 6 years ago

PkuRainBow commented 6 years ago

I have solved this problem!

I tested the PyTorch implementation in my project, calling lovasz_softmax as below:

import torch.nn as nn
import torch.nn.functional as F

from utils.lovasz_loss import lovasz_softmax


class CriterionLovaszSoftmax(nn.Module):
    '''
    Lovasz-Softmax loss:
        loss function used to optimize the mIoU directly.
    '''
    def __init__(self, ignore_index=255):
        super(CriterionLovaszSoftmax, self).__init__()
        self.ignore_index = ignore_index

    def forward(self, preds, target):
        n, h, w = target.size(0), target.size(1), target.size(2)
        # upsample the logits to the ground-truth resolution
        scale_pred = F.upsample(input=preds, size=(h, w), mode='bilinear')
        # per-pixel class probabilities over the channel dimension
        prob = F.softmax(scale_pred, dim=1)
        loss = lovasz_softmax(prob, target, ignore=self.ignore_index)
        return loss

But I got the following error:

File "/home/sdb/semantic-segmentation/utils/criterion.py", line 40, in forward loss = lovasz_softmax(prob, target, ignore=self.ignore_index) File "/home/sdb/semantic-segmentation/utils/lovasz_loss.py", line 166, in lovasz_softmax loss = lovasz_softmax_flat(*flatten_probas(probas, labels, ignore), only_present=only_present) File "/home/sdb/semantic-segmentation/utils/lovasz_loss.py", line 183, in lovasz_softmax_flat errors = (Variable(fg) - probas[:, c]).abs() RuntimeError: Variable data has to be a tensor, but got Variable

Then I checked the implementation of the lovasz_softmax_flat function:

def lovasz_softmax_flat(probas, labels, only_present=False):
    """
    Multi-class Lovasz-Softmax loss
      probas: [P, C] Variable, class probabilities at each prediction (between 0 and 1)
      labels: [P] Tensor, ground truth labels (between 0 and C - 1)
      only_present: average only on classes present in ground truth
    """
    C = probas.size(1)
    losses = []
    for c in range(C):
        fg = (labels == c).float() # foreground for class c
        if only_present and fg.sum() == 0:
            continue
        errors = (Variable(fg) - probas[:, c]).abs()
        errors_sorted, perm = torch.sort(errors, 0, descending=True)
        perm = perm.data
        fg_sorted = fg[perm]
        losses.append(torch.dot(errors_sorted, Variable(lovasz_grad(fg_sorted))))
    return mean(losses)

So it seems that we do not need to wrap fg with Variable(fg) here: since the labels I pass in are a Variable, fg = (labels == c).float() is already a Variable, and wrapping a Variable in Variable() raises the error above.
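For reference, a minimal reproduction of the type mismatch under PyTorch 0.3 semantics (the tensor values here are just for illustration):

import torch
from torch.autograd import Variable

labels = Variable(torch.LongTensor([0, 1, 1]))  # ground truth passed as a Variable
fg = (labels == 1).float()  # comparing a Variable yields a Variable, not a Tensor
Variable(fg)  # RuntimeError: Variable data has to be a tensor, but got Variable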

bermanmaxim commented 6 years ago

Hi @PkuRainBow, indeed, as explained in the docstring, the function expects a Variable prediction and a Tensor ground truth. This makes sense, as you cannot backpropagate through the ground truth. Note that PyTorch 0.4 alleviates the distinction between Tensors and Variables, so this distinction should disappear in the future.
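In other words, a call shaped like the following matches the expected types (a minimal sketch assuming PyTorch 0.3 and this repo's lovasz_softmax; the module path and tensor shapes are illustrative):

import torch
import torch.nn.functional as F
from torch.autograd import Variable
from lovasz_losses import lovasz_softmax  # adjust the import to your local copy

logits = Variable(torch.randn(2, 3, 4, 4), requires_grad=True)  # [B, C, H, W] prediction
probas = F.softmax(logits, dim=1)  # class probabilities, still a Variable
labels = torch.LongTensor(2, 4, 4).random_(0, 3)  # [B, H, W] ground truth as a plain Tensor
loss = lovasz_softmax(probas, labels, ignore=255)
loss.backward()  # gradients flow only through the prediction

Under PyTorch 0.4 and later, Tensors and Variables are merged, so passing the ground truth as a plain (non-differentiable) tensor works without any explicit unwrapping.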