Dice Loss for Data-imbalanced NLP Tasks #41


TL;DR

NLP classification tasks are typically evaluated with the F1 score but optimized with cross entropy, and on imbalanced data there is a large gap between the two objectives. The authors therefore propose a DSC (Dice)-based loss that can be interpreted as a smoothed F1 score, multiplied by a factor that drives the weight of easily classified samples toward zero. They confirm that the loss is effective across a number of models and datasets.
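
The core idea is simple enough to sketch. Below is a minimal PyTorch sketch of the self-adjusting DSC loss for binary classification, following the paper's per-example formulation DSC = (2·(1−p)·p·y + γ) / ((1−p)·p + y + γ) with loss 1 − DSC; the (1−p) factor is what pushes the contribution of easy samples toward zero. The function name, `gamma` default, and mean reduction are assumptions for illustration, not the authors' reference implementation.

```python
import torch

def self_adjusting_dice_loss(probs: torch.Tensor,
                             targets: torch.Tensor,
                             gamma: float = 1.0) -> torch.Tensor:
    """Sketch of a self-adjusting Dice-style loss (binary case).

    probs   -- (N,) predicted probabilities for the positive class
    targets -- (N,) gold labels in {0, 1}, as floats
    gamma   -- smoothing term that keeps the ratio defined when both
               numerator and denominator approach zero (default assumed)
    """
    # (1 - p) * p down-weights easy examples: as p -> 1 for a true
    # positive, its weight -> 0, similar in spirit to focal loss.
    weighted = (1.0 - probs) * probs
    dsc = (2.0 * weighted * targets + gamma) / (weighted + targets + gamma)
    return (1.0 - dsc).mean()

# Example usage (hypothetical model and labels):
# logits = model(x)                                              # (N,) raw scores
# loss = self_adjusting_dice_loss(torch.sigmoid(logits), labels.float())
```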

Why it matters

Paper URL

https://arxiv.org/abs/1911.02855

Submission Date (yyyy/mm/dd)

2019/11/07

Authors and institutions

Xiaoya Li, Xiaofei Sun, Yuxian Meng, Junjun Liang, Fei Wu, Jiwei Li

Methods

Results


Comments

Accepted at ACL 2020.