
Reading: Exploring Vector Spaces for Semantic Relations #120

Open a1da4 opened 4 years ago

a1da4 commented 4 years ago

0. Paper

@inproceedings{gabor-etal-2017-exploring,
    title = "Exploring Vector Spaces for Semantic Relations",
    author = {G{\'a}bor, Kata and Zargayouna, Ha{\"\i}fa and Tellier, Isabelle and Buscaldi, Davide and Charnois, Thierry},
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/D17-1193",
    doi = "10.18653/v1/D17-1193",
    pages = "1814--1823",
    abstract = "Word embeddings are used with success for a variety of tasks involving lexical semantic similarities between individual words. Using unsupervised methods and just cosine similarity, encouraging results were obtained for analogical similarities. In this paper, we explore the potential of pre-trained word embeddings to identify generic types of semantic relations in an unsupervised experiment. We propose a new relational similarity measure based on the combination of word2vec{'}s CBOW input and output vectors which outperforms concurrent vector representations, when used for unsupervised clustering on SemEval 2010 Relation Classification data.",
}

1. What is it?

The authors proposed a new relational similarity measure between word pairs, based on combining word2vec's CBOW input and output vectors, and evaluated it by unsupervised clustering of semantic relations.

2. What is amazing compared to previous works?

They proposed combining word vectors by addition and element-wise multiplication, and using word2vec's output (context) vectors in addition to the usual input vectors when computing similarities.

3. Where is the key to technologies and techniques?

To compute the relational similarity between word pairs (a1, a2) and (b1, b2), they proposed several measures, such as comparing pairwise combinations (addition, element-wise multiplication) of the word vectors, and "in-out" combinations that mix word2vec's CBOW input vectors with its output (context) vectors.
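The measures above can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the function names and the specific in-out combination (concatenating the input vector of one word with the output vector of the other) are assumptions made here for clarity.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def offset_similarity(a1, a2, b1, b2):
    """Analogy-style baseline: cosine of the two offset vectors."""
    return cosine(a1 - a2, b1 - b2)

def additive_similarity(a1, a2, b1, b2):
    """Represent each pair by the sum of its word vectors."""
    return cosine(a1 + a2, b1 + b2)

def multiplicative_similarity(a1, a2, b1, b2):
    """Represent each pair by the element-wise product of its word vectors."""
    return cosine(a1 * a2, b1 * b2)

def in_out_similarity(a1_in, a2_out, b1_in, b2_out):
    """In-out variant (an assumption here): pair the CBOW input vector of
    the first word with the output (context) vector of the second word."""
    return cosine(np.concatenate([a1_in, a2_out]),
                  np.concatenate([b1_in, b2_out]))
```

Note that the offset measure is order-sensitive: swapping (b1, b2) to (b2, b1) flips the sign of the second offset, which is what lets it capture directed relations.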

4. How did they evaluate it?

They used their similarity measures (Section 3) as the input to a clustering model, on the SemEval 2010 Relation Classification data.

As shown in the screenshot below, the in-out methods outperform the baseline. (Screenshot 2020-09-08 14:25:28)
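The evaluation pipeline can be sketched roughly as: represent each relation instance (a word pair) as a feature vector, then cluster the instances and compare the clusters to the gold relation types. The feature construction below (concatenating the two word vectors) and the deterministic k-means initialization are simplifying assumptions for illustration, not the paper's setup.

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Minimal k-means over the row vectors of X.
    Initializes centers from the first k rows (a simplification)."""
    centers = X[:k].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assign each point to its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # recompute each center as the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy relation instances: each row stands for one word pair's feature
# vector; in the paper this would be built from the word embeddings.
pairs = np.array([
    [0.0, 0.1],   # pretend relation A
    [5.0, 5.1],   # pretend relation B
    [0.1, 0.0],   # pretend relation A
    [5.1, 4.9],   # pretend relation B
])
labels = kmeans(pairs, k=2)
```

With well-separated toy data, instances of the same (pretend) relation end up in the same cluster, which is the behavior the paper's clustering evaluation scores.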

5. Is there a discussion?

6. Which paper should be read next?

The use of context (output) vectors is based on this paper: [A Simple Word Embedding Model for Lexical Substitution]

a1da4 commented 4 years ago

#121 A Simple Word Embedding Model for Lexical Substitution