
Reading: Enhancing Contextual Word Representations Using Embedding of Neighboring Entities in Knowledge Graphs #247


0. Paper

1. What is it?

They propose a method that injects KG entity embeddings, together with those of neighboring entities, into contextual word representations via an attention mechanism.


2. What is amazing compared to previous works?

Their model considers not only the entities directly linked to a word by KG relations, but also KG neighbors that are not directly connected.

3. What is the key technology or technique?

3.1 Knowledge Graph Attention


Basically, the relational structure (head $h$, relation $r$, tail $t$) of KG embeddings $v_{KG}(h), v_r(r), v_{KG}(t)$ is defined as follows:

$$v_{KG}(t) \approx \phi(v_{KG}(h), v_r(r))$$
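As a concrete illustration, $\phi$ could be a TransE-style translation, where the tail embedding is approximated by adding the relation vector to the head embedding. This is a minimal sketch under that assumption; the paper may use a different KG embedding model:

```python
import torch

def phi(v_head: torch.Tensor, v_rel: torch.Tensor) -> torch.Tensor:
    """TransE-style composition: predict the tail embedding by
    translating the head embedding along the relation vector."""
    return v_head + v_rel

# With TransE-trained embeddings, v_KG(t) ≈ phi(v_KG(h), v_r(r)).
d = 50
v_head, v_rel = torch.randn(d), torch.randn(d)
v_tail_pred = phi(v_head, v_rel)  # estimate of v_KG(t)
```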

In this paper, they define the structure using the attention mechanism.

$$Q^i = v_{KG}(w^i) \oplus v_{PLM}(w^i)$$

$$K^{i, j} = V^{i, j} = v_{KG}(w^i) \oplus v_{PLM}(w^i)\ \ (\textrm{if}\ j = 0)$$

$$K^{i, j} = V^{i, j} = \phi(v_{KG}(w^i), v_r(r^i)) \oplus v_{PLM}(w^i)\ \ (\textrm{if}\ j > 0)$$

The method obtains PLM word embeddings $v_{PLM}(w^i)$ and KG entity embeddings $v_{KG}(w^i)$ as in Figure 5.
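Below is a minimal sketch of how these definitions could be wired together for a single word $w^i$, assuming one relation embedding per KG edge, plain scaled dot-product scoring, and the TransE-style $\phi$ sketched above; none of these details are confirmed by the summary:

```python
import torch
import torch.nn.functional as F

def kg_attention(v_kg_w, v_plm_w, v_rel, phi):
    """Attend from word w^i over itself (j = 0) and the tails
    predicted from its n outgoing KG relations (j = 1..n).

    v_kg_w:  (d_kg,)   KG entity embedding v_KG(w^i)
    v_plm_w: (d_plm,)  PLM word embedding  v_PLM(w^i)
    v_rel:   (n, d_kg) relation embeddings v_r(r) of the word's KG edges
    phi:     composition function, e.g. the TransE-style phi above
    """
    n = v_rel.shape[0]

    # Query: Q^i = v_KG(w^i) ⊕ v_PLM(w^i)
    q = torch.cat([v_kg_w, v_plm_w])                        # (d_kg + d_plm,)

    # j = 0 key/value: the word itself (equal to Q^i by definition).
    kv_self = q.unsqueeze(0)                                # (1, d)

    # j > 0 keys/values: phi(v_KG(w^i), v_r(r)) ⊕ v_PLM(w^i)
    tails = phi(v_kg_w.unsqueeze(0), v_rel)                 # (n, d_kg)
    kv_nb = torch.cat([tails, v_plm_w.expand(n, -1)], -1)   # (n, d)

    k = v = torch.cat([kv_self, kv_nb], dim=0)              # (n + 1, d)

    # Scaled dot-product attention of q against the n + 1 candidates.
    scores = k @ q / q.shape[0] ** 0.5                      # (n + 1,)
    return F.softmax(scores, dim=0) @ v                     # (d,)

# Usage: one word with 3 KG neighbors, toy dimensions.
out = kg_attention(torch.randn(50), torch.randn(768),
                   torch.randn(3, 50), phi=lambda h, r: h + r)
```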

3.2 Overview


4. How did they evaluate it?

In the reported results, the proposed (RoBERTa-based) method achieves higher performance than prior work.

5. Is there a discussion?

6. Which paper should I read next?