
Reading: Mind the Gap: Assessing Temporal Generalization in Neural Language Models #243


0. Paper

Mind the Gap: Assessing Temporal Generalization in Neural Language Models (Lazaridou et al., NeurIPS 2021)

1. What is it?

They study how neural language models degrade when they are evaluated on text published after their training period.

2. What is amazing compared to previous works?

3. Where is the key to technologies and techniques?

They define a new evaluation setup, "Temporal Generalization": a language model is trained on text published up to a cutoff date (TIME-STRATIFIED) and tested on text published after that date, compared against a CONTROL model trained on text from the same period as the test set.
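
A minimal sketch of this setup, assuming the corpus is a list of (text, publication date) pairs; the toy documents and the 2019 cutoff below are illustrative, not the paper's exact data pipeline.

```python
from datetime import date

# Toy corpus of (text, publication_date) pairs (illustrative only).
corpus = [
    ("news article from 2015 ...", date(2015, 3, 1)),
    ("news article from 2018 ...", date(2018, 7, 12)),
    ("news article from 2019 ...", date(2019, 5, 20)),
    ("news article from 2020 ...", date(2020, 1, 8)),
]

CUTOFF = date(2019, 1, 1)  # assumed cutoff, for illustration only

# TIME-STRATIFIED: train only on documents published before the cutoff,
# then evaluate on documents published after it.
time_stratified_train = [(t, d) for t, d in corpus if d < CUTOFF]
test_docs = [(t, d) for t, d in corpus if d >= CUTOFF]

# CONTROL: train on documents drawn from the same period as the test data
# (disjoint from the test documents themselves), so the gap between the two
# models isolates the effect of temporal drift.
control_train = [(t, d) for t, d in corpus if d >= CUTOFF]
```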

4. How did they evaluate it?

Internal evaluation (perplexity)

[Figure 1 of the paper]

Figure 1 shows that the degradation of the TIME-STRATIFIED models grows over time: the further the test data lies beyond the training period, the larger the gap to the CONTROL models.
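
A rough sketch of how such a degradation curve can be computed, assuming a hypothetical `perplexity(model, docs)` helper and test documents tagged with publication dates; the per-month relative increase mirrors the kind of quantity Figure 1 plots, but the names here are placeholders, not the paper's code.

```python
from collections import defaultdict

def degradation_by_month(time_stratified_model, control_model, test_docs, perplexity):
    """Relative perplexity increase of the TIME-STRATIFIED model over the
    CONTROL model, grouped by the test documents' publication month."""
    by_month = defaultdict(list)
    for text, pub_date in test_docs:
        by_month[(pub_date.year, pub_date.month)].append(text)

    curve = {}
    for month in sorted(by_month):
        docs = by_month[month]
        ppl_ts = perplexity(time_stratified_model, docs)
        ppl_ctrl = perplexity(control_model, docs)
        curve[month] = (ppl_ts - ppl_ctrl) / ppl_ctrl  # relative increase
    return curve
```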

External evaluation (downstream tasks)


Models:

Figure 4 shows that performance also degrades for models whose pre-training data lies further from the 2019 test period.

5. Is there a discussion?

They analyze ways to mitigate this degradation.

[Figure 3 of the paper]

Figure 3 shows that simply increasing the number of parameters of the pre-trained model is ineffective: larger models do not close the temporal gap.

[Figure 5 of the paper]

Figure 5 shows that dynamic evaluation (updating the model online on the incoming test data) slows down the degradation.
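
A minimal PyTorch-style sketch of dynamic evaluation, assuming a causal language model whose forward pass returns token logits and test batches ordered by publication time; the optimizer choice and learning rate are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def dynamic_evaluation(model, test_batches, lr=1e-4):
    """Score each batch with the current weights, then take one gradient step
    on that batch so later (newer) batches benefit from the update."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    total_nll, total_tokens = 0.0, 0

    for inputs, targets in test_batches:  # batches sorted by publication time
        # 1) Evaluate on the new batch before adapting to it.
        model.eval()
        with torch.no_grad():
            logits = model(inputs)
            nll = F.cross_entropy(
                logits.view(-1, logits.size(-1)), targets.view(-1), reduction="sum"
            )
        total_nll += nll.item()
        total_tokens += targets.numel()

        # 2) Then update the model on the same batch (online adaptation).
        model.train()
        optimizer.zero_grad()
        logits = model(inputs)
        loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
        loss.backward()
        optimizer.step()

    # Perplexity over the whole stream, always measured before each update.
    return float(torch.exp(torch.tensor(total_nll / total_tokens)))
```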

6. Which paper should we read next?