
Reading: On the Transformation of Latent Space in Fine-Tuned NLP Models #257


a1da4 commented 1 year ago

0. Paper

1. What is it?

In this paper, the authors further evaluate the method they proposed previously (https://aclanthology.org/2022.naacl-main.225) across multiple frameworks.

2. What is amazing compared to previous works?

Previous works tried to analyze fine-tuned representations using supervised classification tasks.

Recently, these authors proposed a method for analyzing the relationship between fine-tuned representations and human-defined linguistic information (part-of-speech, morphology, or chunking).

In that work, they proposed an alignment score (Eq. 1), which evaluates how many words $w$ from concept $C_1$ (e.g., a cluster from the fine-tuned LM) are contained in concept $C_2$ (e.g., NN: words tagged noun, singular or mass).

(Screenshots: the alignment score, Eq. 1, from the paper)

In the experiment, they evaluate how many human-defined concepts the fine-tuned model can align with ($\theta = 0.9$). Figure 2 from the paper shows the rate of aligned concepts:

$$\frac{\text{Number of concepts where Eq. 1} = 1}{\text{Number of concepts}}$$

(Screenshot: Figure 2 from the paper)
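A minimal sketch of how the alignment score and the aligned-concept rate might be computed, assuming Eq. 1 marks a cluster $C_1$ as aligned with a concept $C_2$ when at least a fraction $\theta$ of the words in $C_1$ also appear in $C_2$ (the function names and toy data are illustrative, not from the paper):

```python
from typing import Dict, Set

def is_aligned(cluster: Set[str], concept: Set[str], theta: float = 0.9) -> bool:
    """Assumed form of Eq. 1: cluster C1 aligns with concept C2 when at least
    a fraction theta of the words in C1 also belong to C2."""
    if not cluster:
        return False
    return len(cluster & concept) / len(cluster) >= theta

def aligned_concept_rate(concepts: Dict[str, Set[str]],
                         clusters: Dict[str, Set[str]],
                         theta: float = 0.9) -> float:
    """Rate of aligned concepts: fraction of human-defined concepts that at
    least one model cluster aligns with (the ratio plotted in Figure 2)."""
    aligned = sum(
        any(is_aligned(cluster, concept, theta) for cluster in clusters.values())
        for concept in concepts.values()
    )
    return aligned / len(concepts)

# Toy example with hypothetical data
concepts = {"NN": {"dog", "cat", "house", "tree", "car"}}  # words tagged NN
clusters = {"c17": {"dog", "cat", "house", "tree"}}        # a cluster from the fine-tuned LM
print(aligned_concept_rate(concepts, clusters))            # -> 1.0
```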

This paper is an upgraded version of the above paper.

3. Where is the key to technologies and techniques?

(Screenshot: figure from the paper)

They adapt the concept matching score (https://aclanthology.org/2022.naacl-main.225) to compare the fine-tuned model's concepts against three kinds of references, as sketched below: clusters of the pre-trained model (embedding space), human-defined concepts (POS, morph, chunk), and task-specific labels.
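A rough sketch of how the adapted matching could be applied in the three settings evaluated in Section 4 (the pairings follow my reading of the section headings; helper names and variables are illustrative):

```python
from typing import Dict, Set

Concepts = Dict[str, Set[str]]  # concept/cluster name -> set of word types

def aligned(c1: Set[str], c2: Set[str], theta: float = 0.9) -> bool:
    # Same assumed alignment rule as in the sketch above (Eq. 1).
    return bool(c1) and len(c1 & c2) / len(c1) >= theta

def matching_rate(source: Concepts, target: Concepts, theta: float = 0.9) -> float:
    """Fraction of source clusters that align with at least one target concept."""
    matched = sum(
        any(aligned(c, t, theta) for t in target.values())
        for c in source.values()
    )
    return matched / len(source)

# The three comparisons of Section 4 (placeholders for per-layer clusterings):
# 4.1 matching_rate(finetuned_clusters, pretrained_clusters)   # embedding space
# 4.2 matching_rate(finetuned_clusters, human_concepts)        # POS / morph / chunk
# 4.3 matching_rate(finetuned_clusters, task_label_concepts)   # positive / negative
```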

4. How did they evaluate it?

4.1 Embedding Space (clusters)

(Screenshot: Figure 3 from the paper)

Figure 3 shows that:

4.2 Human-defined (pos, morph, chunk)

(Screenshot: Figure 4 from the paper)

Figure 4 shows that models forget the POS information in the upper layers (POS is less important for sentence classification tasks).

4.3 Task-specific (positive / negative)

(Screenshot: Figure 5 from the paper)

Figure 5 shows that models learn positive / negative tags in their upper layers.

5. Is there a discussion?

From Figures 3, 4, and 5, only the ALBERT model behaves differently from the other models. They conclude that the cause is the cross-layer parameter sharing used in ALBERT.

6. Which paper should be read next?