
Reading: Exploring Numeracy in Word Embeddings #17

Open a1da4 opened 4 years ago

a1da4 commented 4 years ago

0. Paper

    @inproceedings{naik-etal-2019-exploring,
        title = "Exploring Numeracy in Word Embeddings",
        author = "Naik, Aakanksha and Ravichander, Abhilasha and Rose, Carolyn and Hovy, Eduard",
        booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
        month = jul,
        year = "2019",
        address = "Florence, Italy",
        publisher = "Association for Computational Linguistics",
        url = "https://www.aclweb.org/anthology/P19-1329",
        doi = "10.18653/v1/P19-1329",
        pages = "3374--3380",
    }

1. What is it?

This paper shows that recent word embedding models fail to capture the mathematical meaning of numbers.

2. What is amazing compared to previous studies?

They developed an analysis framework to test two properties of numbers, magnitude and numeration.

3. Where is the key to technologies and techniques?

They defined the analysis framework as follows:

a property p is defined over a triple (x, x+, x-): x must be closer to x+ than to x- under p

They set three types of x-, corresponding to the paper's three tests (the screenshot here showed the test definitions):

1. OVA (One-vs-All): x- ranges over all other numerals, so x must be closer to x+ than to every other numeral.
2. SC (Strict Contrastive): x- is the numeral second closest to x under p.
3. BC (Broad Contrastive): x- is the numeral furthest from x under p.

As above, the conditions become less strict in the order 1 > 2 > 3. Every test reduces to checking d(x, x+) < d(x, x-), differing only in the choice of x-; a sketch follows below.
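
To make this concrete, here is a minimal Python sketch of the three magnitude tests under cosine distance. This is my own illustration, not the authors' code: the function names, the toy random embeddings, and the numeral range are all assumptions.

```python
import numpy as np

def cosine_distance(u, v):
    """Cosine distance between two embedding vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def magnitude_tests(embeddings, numerals):
    """Run the OVA/SC/BC magnitude tests for every numeral x.

    embeddings: dict mapping a numeral (int) to its vector.
    x+ is the numeral closest to x in magnitude; x- depends on the test
    (all other numerals / second closest / furthest).
    """
    results = {"OVA": 0, "SC": 0, "BC": 0}
    for x in numerals:
        # Rank the other numerals by magnitude distance |x - y|
        others = sorted((y for y in numerals if y != x), key=lambda y: abs(x - y))
        x_plus = others[0]
        d_plus = cosine_distance(embeddings[x], embeddings[x_plus])

        # OVA: x must be closer to x+ than to every other numeral
        if all(d_plus < cosine_distance(embeddings[x], embeddings[y]) for y in others[1:]):
            results["OVA"] += 1
        # SC: x- is the second-closest numeral in magnitude
        if d_plus < cosine_distance(embeddings[x], embeddings[others[1]]):
            results["SC"] += 1
        # BC: x- is the furthest numeral in magnitude
        if d_plus < cosine_distance(embeddings[x], embeddings[others[-1]]):
            results["BC"] += 1
    return {test: passed / len(numerals) for test, passed in results.items()}

# Toy usage: random embeddings, so all scores should sit near chance level
rng = np.random.default_rng(0)
numerals = list(range(1, 11))
embeddings = {n: rng.normal(size=50) for n in numerals}
print(magnitude_tests(embeddings, numerals))
```

By construction, a BC pass is easier than an SC pass, which is easier than an OVA pass, matching the ordering above.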

4. How did they validate it?

They evaluated the models on magnitude (the relative size of numbers, e.g., 3 < 4) and numeration (the association between a numeral and its word form, e.g., 3 and three).
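
The numeration tests can be sketched the same way. Below is a hypothetical OVA-style check; again this is my own illustration, and `numeration_ova` and the token vocabulary are assumptions, not the paper's code.

```python
import numpy as np

def cosine_distance(u, v):
    """Cosine distance between two embedding vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def numeration_ova(embeddings, pairs):
    """OVA-style numeration test.

    embeddings: dict mapping a token string to its vector.
    pairs: list of (numeral, word) tuples, e.g. [("3", "three"), ("7", "seven")].
    A pair passes if the numeral is closer to its own word form than to
    every other word form in the list.
    """
    correct = 0
    for numeral, word in pairs:
        d_plus = cosine_distance(embeddings[numeral], embeddings[word])
        other_words = [w for _, w in pairs if w != word]
        if all(d_plus < cosine_distance(embeddings[numeral], embeddings[w])
               for w in other_words):
            correct += 1
    return correct / len(pairs)
```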

The BC-MAG score is high, but the other scores (OVA-MAG, SC-MAG, and the numeration tests) are low. This means these models learn only an approximate notion of magnitude, not a precise one.

Retraining the models on a common corpus gave the same result as the off-the-shelf pre-trained embeddings.

5. Is there a discussion?

This work also raises important questions about other categories of word-like tokens that need to be treated as special cases.

6. Which paper should we read next?

A paper on the same topic was published at EMNLP 2019.

a1da4 commented 4 years ago

#18