We reproduced all label-smoothing experiments ourselves; the numbers were not copied from existing works. This is the code we used:
```python
import torch
import torch.nn as nn


class CrossEntropyLabelSmooth(nn.Module):
    """Cross-entropy loss with label smoothing."""

    def __init__(self, num_classes, epsilon=0.1):
        super(CrossEntropyLabelSmooth, self).__init__()
        self.num_classes = num_classes
        self.epsilon = epsilon
        self.logsoftmax = nn.LogSoftmax(dim=1)

    def forward(self, inputs, targets):
        """
        Args:
            inputs: prediction matrix (before softmax) with shape (batch_size, num_classes)
            targets: ground truth labels with shape (batch_size,)
        """
        log_probs = self.logsoftmax(inputs)
        # Convert integer labels to one-hot vectors.
        targets = torch.zeros_like(log_probs).scatter_(1, targets.unsqueeze(1), 1)
        # Smooth the targets: keep (1 - epsilon) on the true class and
        # spread epsilon uniformly across all num_classes classes.
        targets = (1 - self.epsilon) * targets + self.epsilon / self.num_classes
        loss = (-targets * log_probs).mean(0).sum()
        return loss
```
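For reference, here is a minimal usage sketch (the batch size, class count, and tensor values below are made up for illustration):

```python
import torch

# Hypothetical example: 4 samples, 10 classes.
criterion = CrossEntropyLabelSmooth(num_classes=10, epsilon=0.1)
logits = torch.randn(4, 10)          # raw model outputs, shape (batch_size, num_classes)
labels = torch.randint(0, 10, (4,))  # integer class labels, shape (batch_size,)
loss = criterion(logits, labels)
print(loss.item())
```

Note that `(-targets * log_probs).mean(0).sum()` averages over the batch dimension and sums over classes, which is equivalent to the mean per-sample cross-entropy against the smoothed target distribution.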
Thanks for your great work. The label-smoothing results reported in your paper are surprisingly high. Did you reproduce these results yourself, or were they copied from existing works? If the former, could you share the code?
Thanks a lot!