SOOJEONGKIMM / Paper_log

papers to-read list + issue

ERNIE: Enhanced Language Representation with Informative Entities #11

Closed SOOJEONGKIMM closed 1 year ago

SOOJEONGKIMM commented 1 year ago

https://arxiv.org/pdf/1905.07129.pdf

SOOJEONGKIMM commented 1 year ago

Abstract: Neural language representation models such as BERT pre-trained on large-scale corpora can well capture rich semantic patterns from plain text, and be fine-tuned to consistently improve the performance of various NLP tasks. However, the existing pre-trained language models rarely consider incorporating knowledge graphs (KGs), which can provide rich structured knowledge facts for better language understanding. We argue that informative entities in KGs can enhance language representation with external knowledge. In this paper, we utilize both large-scale textual corpora and KGs to train an enhanced language representation model (ERNIE), which can take full advantage of lexical, syntactic, and knowledge information simultaneously. The experimental results have demonstrated that ERNIE achieves significant improvements on various knowledge-driven tasks, and meanwhile is comparable with the state-of-the-art model BERT on other common NLP tasks. The source code of this paper can be obtained from this https URL

SOOJEONGKIMM commented 1 year ago

Introduction

(1) Structured Knowledge Encoding

encode the graph structure of KGs with knowledge embedding algorithms such as TransE, then take the resulting informative entity embeddings as input for ERNIE (a rough sketch follows after this list).

(2) Heterogeneous Information Fusion

ERNIE is pre-trained on both large-scale textual corpora and KGs, so it can take advantage of lexical, syntactic, and knowledge information simultaneously.
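
A minimal sketch of the TransE step above, assuming a PyTorch setup; this is my own illustration, not the paper's released code. The learned entity table (`self.ent`) plays the role of the informative entity embeddings that ERNIE consumes.

```python
import torch
import torch.nn as nn

class TransE(nn.Module):
    """Toy TransE: embed a KG so that head + relation ~ tail for true triples."""
    def __init__(self, n_entities, n_relations, dim=100):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)   # entity embeddings later fed to ERNIE
        self.rel = nn.Embedding(n_relations, dim)
        nn.init.xavier_uniform_(self.ent.weight)
        nn.init.xavier_uniform_(self.rel.weight)

    def score(self, h, r, t):
        # Smaller ||h + r - t|| means the triple (h, r, t) is more plausible.
        return torch.norm(self.ent(h) + self.rel(r) - self.ent(t), p=1, dim=-1)

    def loss(self, pos, neg, margin=1.0):
        # Margin ranking loss between a true triple and a corrupted (negative) one.
        return torch.relu(margin + self.score(*pos) - self.score(*neg)).mean()
```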

SOOJEONGKIMM commented 1 year ago

Methodology

Each entity is aligned to the first token in its named entity phrase.

The textual encoder (T-Encoder) captures basic lexical and syntactic information from the input tokens.

The knowledgeable encoder (K-Encoder) integrates extra token-oriented knowledge information into the textual representations.

Integration of the input of tokens and entities: the information fusion layer takes two kinds of input, the token embedding alone (for tokens with no aligned entity) and the combination of the token embedding and its aligned entity embedding. After information fusion, it outputs new token embeddings and entity embeddings for the next layer (see the sketch below).
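
A hedged sketch of that fusion step, based on my reading of the aggregator described above rather than the released implementation; the dimension names, the GELU activation, and the masking convention are assumptions.

```python
import torch
import torch.nn as nn

class InformationFusion(nn.Module):
    """One fusion step: mixes token embeddings with their aligned entity embeddings."""
    def __init__(self, d_token, d_entity, d_hidden):
        super().__init__()
        self.w_t = nn.Linear(d_token, d_hidden)     # token -> fused space
        self.w_e = nn.Linear(d_entity, d_hidden)    # entity -> fused space
        self.out_t = nn.Linear(d_hidden, d_token)   # fused -> new token embedding
        self.out_e = nn.Linear(d_hidden, d_entity)  # fused -> new entity embedding
        self.act = nn.GELU()

    def forward(self, tok, ent, has_ent):
        # tok:     (seq_len, d_token)  token embeddings
        # ent:     (seq_len, d_entity) entity embeddings aligned to the first token
        #          of each named entity phrase; rows without an entity are ignored
        # has_ent: (seq_len,) bool mask marking tokens that have an aligned entity
        h = self.w_t(tok)
        h = torch.where(has_ent.unsqueeze(-1), h + self.w_e(ent), h)
        h = self.act(h)
        new_tok = self.act(self.out_t(h))   # new token embeddings for the next layer
        new_ent = self.act(self.out_e(h))   # new entity embeddings for the next layer
        return new_tok, new_ent
```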

SOOJEONGKIMM commented 1 year ago

Knowledgeable Encoder

SOOJEONGKIMM commented 1 year ago

Pre-training for Injecting Knowledge
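
The equations were in the screenshots, which are not preserved here. As a loose sketch of the paper's dEA idea (denoising entity auto-encoder: randomly mask token-entity alignments and predict the aligned entity from the entities appearing in the sequence); the masking rate and the shared embedding dimension below are assumptions, not the paper's exact hyperparameters.

```python
import torch
import torch.nn.functional as F

def dea_loss(token_out, cand_entity_emb, gold_idx, mask_prob=0.15):
    # token_out:       (n_aligned, d) final token representations at aligned positions
    # cand_entity_emb: (n_cand, d)    embeddings of the entities appearing in this sequence
    #                  (assumes token and entity vectors share dimension d; otherwise
    #                   add a linear projection first)
    # gold_idx:        (n_aligned,)   index of the correct candidate entity per token
    masked = torch.rand(token_out.size(0)) < mask_prob   # which alignments to mask and predict
    if not masked.any():
        return token_out.new_zeros(())
    logits = token_out[masked] @ cand_entity_emb.t()      # score each candidate entity
    return F.cross_entropy(logits, gold_idx[masked])
```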

SOOJEONGKIMM commented 1 year ago

Fine-tuning for Specific Tasks

Relation classification: apply the pooling layer to the final output embeddings of the given entity mentions.

(+) Add two mark tokens to highlight the entity mentions: [HD] for the head entity and [TL] for the tail entity (see the sketch below).
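
A small illustrative helper (hypothetical, not from the released code) showing how the [HD] and [TL] marks could wrap the head and tail mentions in the input sequence before fine-tuning:

```python
def mark_relation_example(tokens, head_span, tail_span):
    """Wrap head/tail mention spans with [HD]/[TL] marks; spans are (start, end), end exclusive."""
    out = []
    for i, tok in enumerate(tokens):
        if i == head_span[0]:
            out.append("[HD]")
        if i == tail_span[0]:
            out.append("[TL]")
        out.append(tok)
        if i == head_span[1] - 1:
            out.append("[HD]")
        if i == tail_span[1] - 1:
            out.append("[TL]")
    return ["[CLS]"] + out + ["[SEP]"]

# mark_relation_example(["Mark", "Twain", "wrote", "Huckleberry", "Finn"], (0, 2), (3, 5))
# -> ['[CLS]', '[HD]', 'Mark', 'Twain', '[HD]', 'wrote', '[TL]', 'Huckleberry', 'Finn', '[TL]', '[SEP]']
```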

SOOJEONGKIMM commented 1 year ago

Experiments


Entity Typing

BERT and ERNIE make full use of both unsupervised pre-training and manually annotated training data for better entity typing.

Informative entities help ERNIE predict the labels more precisely.

SOOJEONGKIMM commented 1 year ago

Relation Classification

Pre-trained language models can provide more information for relation classification than CNN and RNN baselines.

The improvement is especially large on FewRel.

Extra knowledge helps the model make full use of the small training data.