jiphyeonjeon / season3

Beyond Distributional Hypothesis: Let Language Models Learn Meaning-Text Correspondence #21

Open jinmang2 opened 2 years ago

jinmang2 commented 2 years ago

Jiphyeonjeon latest-papers study group

Abstract

The logical negation property (LNP), which requires a model to produce different predictions for semantically opposite inputs (p is true iff ¬p is false), is an important property that a trustworthy language model must satisfy. However, much recent evidence shows that large pre-trained language models (PLMs) do not satisfy this property. In this paper, we perform probing experiments to assess PLMs' understanding of the LNP. Unlike previous studies that examined only negation expressions, we expand the investigation to lexical semantics. Through these experiments, we observe that PLMs frequently violate the LNP. To alleviate the issue, we propose a novel intermediate training task, named meaning-matching, designed to directly learn a meaning-text correspondence instead of relying on the distributional hypothesis. Through multiple experiments, we find that this task enables PLMs to learn lexical semantic information. Also, through fine-tuning experiments on 7 GLUE tasks, we confirm that it is a safe intermediate task that guarantees similar or better performance on downstream tasks. Finally, we observe that our proposed approach outperforms previous counterparts despite being more time- and resource-efficient.
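
To make the probing idea concrete, here is a minimal sketch (not the paper's code; the model name and probe sentences are illustrative placeholders) of how one might check the LNP with an off-the-shelf masked LM: a model that satisfies the LNP should make different predictions for a sentence and its negation.

```python
# Hypothetical LNP probe: compare a masked LM's top predictions for a
# sentence and its negation. "bert-base-uncased" and the sentences are
# placeholder choices, not taken from the paper.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

positive = "A ruler is used to measure [MASK]."
negated = "A ruler is not used to measure [MASK]."

for sentence in (positive, negated):
    print(sentence)
    for pred in fill(sentence, top_k=3):
        # each prediction carries the filled token and its probability
        print(f"  {pred['token_str']}  (score={pred['score']:.3f})")
```

If the two top-k lists come out nearly identical, the model is effectively ignoring the negation, which is exactly the kind of LNP violation the abstract reports.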

eubinecto commented 2 years ago

Adding the abstract!

jinmang2 commented 2 years ago

@eubinecto Thank you! haha

eubinecto commented 2 years ago

The paper was published on arXiv today! Leaving the link here: https://arxiv.org/abs/2205.03815