a1da4 / paper-survey

Summary of machine learning papers

Reading: Quantifying Uncertainties in Natural Language Processing Tasks #19

Open a1da4 opened 4 years ago

a1da4 commented 4 years ago

0. Paper

Quantifying Uncertainties in Natural Language Processing Tasks

1. What is it?

The authors incorporated uncertainty estimation into NLP tasks and measured how much it improves model performance.

2. What is amazing compared to previous studies?

They applied uncertainty estimation to NLP tasks, which previous studies had not done.

3. Where is the key to technologies and techniques?

They define the total uncertainty as the sum of model uncertainty and data uncertainty.
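This decomposition follows the law of total variance; a sketch, writing $W$ for the dropout-sampled weights:

$$
\mathrm{Var}(y) \;=\; \underbrace{\mathbb{E}_{W}\big[\mathrm{Var}(y \mid W)\big]}_{\text{data uncertainty}} \;+\; \underbrace{\mathrm{Var}_{W}\big(\mathbb{E}[y \mid W]\big)}_{\text{model uncertainty}}
$$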

Model Uncertainty

Model uncertainty is defined as the variance of the expectation, estimated with MC (Monte Carlo) dropout: run multiple stochastic forward passes with dropout kept on, then take the variance of the predicted means.

Data Uncertainty

Data uncertainty is defined as the expectation of the variance. The assumption is that data uncertainty depends on the input, so the model predicts an input-dependent variance.
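A minimal sketch of the decomposition above, assuming a regression model that outputs both a mean and an input-dependent variance per forward pass. `mc_dropout_predict` is a hypothetical stand-in for the network; the noise on the means mimics dropout randomness:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, n_samples=50):
    """Hypothetical stand-in for T stochastic forward passes with dropout on.
    Each pass returns a predicted mean; the network also predicts an
    input-dependent variance (data/aleatoric uncertainty)."""
    means = 1.5 * x + rng.normal(0.0, 0.1, size=n_samples)  # dropout noise
    variances = np.full(n_samples, 0.2 + 0.05 * abs(x))     # input-dependent
    return means, variances

def decompose_uncertainty(means, variances):
    model_unc = means.var()      # variance of the expectation (model)
    data_unc = variances.mean()  # expectation of the variance (data)
    return model_unc, data_unc, model_unc + data_unc

means, variances = mc_dropout_predict(x=2.0)
model_u, data_u, total_u = decompose_uncertainty(means, variances)
print(model_u, data_u, total_u)
```

The same split carries over to classification in the paper, with the variance computed over predicted class probabilities rather than a predicted Gaussian variance.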

4. How did they validate it?

They evaluated on three NLP tasks.

Sentiment Analysis

The result shows that using both model and data uncertainty improved performance; in particular, model uncertainty contributed significantly.

Named Entity Recognition

The result shows that using data uncertainty improved the score significantly, while model uncertainty improved it by only about 1%, likely because MC dropout operates at the token level but the evaluation is chunk-based (the method and the evaluation do not fit well together).

Language Modeling

The result shows that using both uncertainties improved the score.

5. Is there a discussion?

[Screenshot 2019-09-23 15:21:26]

These graphs show that data uncertainty tracks the entropy and difficulty of the input.

6. Which paper should I read next?

To analyze model uncertainty: What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?

a1da4 commented 4 years ago

22