ShannonAI / dice_loss_for_NLP

This repo contains the code for the ACL 2020 paper `Dice Loss for Data-imbalanced NLP Tasks`.
Apache License 2.0

Dice Loss Error #2

Open · albertnanda opened this issue 3 years ago

albertnanda commented 3 years ago

I have a two-part question:

  1. The example given in the code fails: https://github.com/ShannonAI/dice_loss_for_NLP/blob/418d09d285c103176152a97d73f8e7ebcdb1fa49/loss/dice_loss.py#L41 raises `IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)`.
  2. The other question is about the implementation. Even if the classifier predicts the labels perfectly, there would still be some dice loss because of the `self.smooth` term in `loss = 1 - ((2 * interection + self.smooth) / (torch.sum(torch.square(flat_input, ), -1) + torch.sum(torch.square(flat_target), -1) + self.smooth))`. Is this the expected behavior, or am I missing something? The snippet below illustrates it.
```python
import torch
from loss.dice_loss import DiceLoss

input = torch.FloatTensor([[1., 0., 0., 0.], [0., 1., 0., 0.]])
target = torch.LongTensor([0, 1])
loss = DiceLoss(with_logits=False, reduction=None, ohem_ratio=0.)
input.requires_grad = True
output = loss(input, target)
```

Output: `tensor([1.9998, 1.9998], grad_fn=<...>)`
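For reference, here is a minimal sketch of the soft dice loss itself (my own reimplementation, not the repo's `DiceLoss`). With `smooth` added to both the numerator and the denominator, a perfect one-hot prediction should still give zero loss, so the smooth term alone does not explain the 1.9998 above:

```python
import torch

def soft_dice_loss(probs, target_onehot, smooth=1e-4):
    # per-sample soft dice: 1 - (2*sum(p*y) + smooth) / (sum(p^2) + sum(y^2) + smooth)
    intersection = torch.sum(probs * target_onehot, dim=-1)
    denom = torch.sum(probs ** 2, dim=-1) + torch.sum(target_onehot ** 2, dim=-1)
    return 1 - (2 * intersection + smooth) / (denom + smooth)

probs = torch.FloatTensor([[1., 0., 0., 0.], [0., 1., 0., 0.]])
onehot = torch.nn.functional.one_hot(torch.LongTensor([0, 1]), num_classes=4).float()
print(soft_dice_loss(probs, onehot))  # tensor([0., 0.]) -- zero loss on a perfect prediction
```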

xiaoya-li commented 3 years ago

Hey, thanks for asking. Response to question 2: as shown in https://github.com/ShannonAI/dice_loss_for_NLP/blob/418d09d285c103176152a97d73f8e7ebcdb1fa49/tasks/tnews/train.py#L139, we recommend the following setting for multi-class tasks:

```python
loss_fct = DiceLoss(square_denominator=True, with_logits=False, index_label_position=True,
                    smooth=1, ohem_ratio=0, alpha=0.01, reduction="none")
```
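For completeness, a hedged usage sketch of these settings (the batch size, the 15-class TNEWS setup, and the softmax step are my assumptions; `with_logits=False` should mean the loss expects probabilities rather than raw logits):

```python
import torch
import torch.nn.functional as F
from loss.dice_loss import DiceLoss  # module from this repo

loss_fct = DiceLoss(square_denominator=True, with_logits=False, index_label_position=True,
                    smooth=1, ohem_ratio=0, alpha=0.01, reduction="none")

logits = torch.randn(8, 15, requires_grad=True)  # batch of 8, 15 classes (as in TNEWS)
labels = torch.randint(0, 15, (8,))              # index labels (index_label_position=True)

loss = loss_fct(F.softmax(logits, dim=-1), labels)  # per-sample losses with reduction="none"
loss.mean().backward()
```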
albertnanda commented 3 years ago

@xiaoya-li: Let me try it out. Can you also recommend settings for multi-label classification and for the NER task? For NER, my tensors look like this:


```python
import torch

inp = torch.FloatTensor([[[.1, .2, .3, .4]] * 4, [[.5, .5, 0, 0]] * 4])  # 2 sentences, 4 words, 4 tags per word
target = torch.LongTensor([[0, 1, 1, 2], [0, 3, 2, 3]])                  # tag ids in [0, 3]; 2 sentences, 4 words each
```
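In case it helps, here is a sketch of how I would flatten these before calling the loss. The 2-D `(batch * seq_len, num_tags)` input layout is my assumption, following the multi-class settings above, not something confirmed by the repo docs:

```python
import torch
from loss.dice_loss import DiceLoss  # module from this repo

inp = torch.FloatTensor([[[.1, .2, .3, .4]] * 4, [[.5, .5, 0, 0]] * 4])  # (2, 4, 4)
target = torch.LongTensor([[0, 1, 1, 2], [0, 3, 2, 3]])                  # (2, 4)

flat_inp = inp.view(-1, inp.size(-1))  # (8, 4): one row of tag probabilities per token
flat_target = target.view(-1)          # (8,): one tag id per token

loss_fct = DiceLoss(square_denominator=True, with_logits=False, index_label_position=True,
                    smooth=1, ohem_ratio=0, alpha=0.01, reduction="none")
loss = loss_fct(flat_inp, flat_target)  # per-token losses; average or sum as needed
```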
kk19990709 commented 2 years ago

I have also run into question 1. Do you have any solution? @albertnanda
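Not an official fix, but the error message itself narrows it down: some op inside `forward` is being asked for `dim=1` on a tensor that only has one dimension. Below is a minimal reproduction of the same message, plus the reshape that avoids it; whether this matches the repo's internals is my guess:

```python
import torch

x = torch.FloatTensor([1., 0., 0., 0.])  # 1-D tensor: valid dims are 0 and -1 only
# torch.sum(x, dim=1)  # raises IndexError: Dimension out of range
#                      # (expected to be in range of [-1, 0], but got 1)

x2 = x.unsqueeze(0)          # (1, 4): add a batch dimension so dim=1 exists
print(torch.sum(x2, dim=1))  # tensor([1.])
```

So passing the input as a 2-D `(batch, num_classes)` tensor, rather than 1-D as in the docstring example, may avoid the error.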