a1da4 / paper-survey

Summary of machine learning papers

Reading: Retrofitting Word Vectors to Semantic Lexicons #210

Open a1da4 opened 2 years ago

a1da4 commented 2 years ago

0. Paper

```bibtex
@inproceedings{faruqui-etal-2015-retrofitting,
    title = "Retrofitting Word Vectors to Semantic Lexicons",
    author = "Faruqui, Manaal and Dodge, Jesse and Jauhar, Sujay Kumar and Dyer, Chris and Hovy, Eduard and Smith, Noah A.",
    booktitle = "Proceedings of the 2015 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = may # "{--}" # jun,
    year = "2015",
    address = "Denver, Colorado",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N15-1184",
    doi = "10.3115/v1/N15-1184",
    pages = "1606--1615",
}
```

1. What is it?

They propose a post-processing approach (retrofitting) that brings the vectors of semantically related words closer together.

2. What is amazing compared to previous works?

Previous methods incorporate lexicon information by adding task-specific terms to the training objective and retraining from scratch (ad-hoc). The post-processing approach in this paper can be applied to any pre-trained word vectors (post-hoc).

3. Where is the key to technologies and techniques?

[Figure from the paper: a word graph with observed vectors (gray) and retrofitted vectors (white)]

To obtain the retrofitted vectors (white nodes), they post-process the pre-trained word vectors (gray nodes), pulling together words that are linked in a semantic lexicon (e.g., PPDB, WordNet, FrameNet).

The objective function is below:
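Reconstructing from the paper (notation as best I recall), the objective keeps each retrofitted vector $q_i$ close to its observed counterpart $\hat{q}_i$ while pulling it toward the words it is linked to in the lexicon:

```latex
\Psi(Q) = \sum_{i=1}^{n} \Big[ \alpha_i \lVert q_i - \hat{q}_i \rVert^2
        + \sum_{(i,j) \in E} \beta_{ij} \lVert q_i - q_j \rVert^2 \Big]
```

Here $E$ is the set of word pairs linked in the semantic lexicon, and $\alpha_i$, $\beta_{ij}$ weight the two terms. Setting $\partial \Psi / \partial q_i = 0$ with the other vectors held fixed gives the closed-form per-word update:

```latex
q_i = \frac{\sum_{j:(i,j) \in E} \beta_{ij} q_j + \alpha_i \hat{q}_i}{\sum_{j:(i,j) \in E} \beta_{ij} + \alpha_i}
```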

The model is fine-tuned via online training: word vectors are updated one at a time using the closed-form update above.
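A minimal sketch of this iterative update in Python, under the paper's setting of $\alpha_i = 1$ and $\beta_{ij} = 1/\mathrm{deg}(i)$; the data structures (dict of NumPy arrays, word → neighbors dict) and the function name `retrofit` are my own assumptions:

```python
import numpy as np


def retrofit(word_vecs, lexicon, n_iters=10):
    """Retrofit pre-trained word vectors toward a semantic lexicon.

    word_vecs: dict word -> np.ndarray, the observed (pre-trained) vectors, kept fixed
    lexicon:   dict word -> list of linked words (synonyms etc. from the lexicon)
    Returns a new dict with the retrofitted vectors.
    """
    new_vecs = {w: v.copy() for w, v in word_vecs.items()}
    vocab = set(word_vecs)

    for _ in range(n_iters):
        for word in vocab:
            # Only neighbors that also have a pre-trained vector contribute.
            neighbors = [n for n in lexicon.get(word, []) if n in vocab]
            if not neighbors:
                continue  # no lexicon links: keep the original vector
            # With alpha_i = 1 and beta_ij = 1/deg(i), the closed-form update is the
            # average of the observed vector and the mean of the neighbors' vectors.
            neighbor_mean = sum(new_vecs[n] for n in neighbors) / len(neighbors)
            new_vecs[word] = (word_vecs[word] + neighbor_mean) / 2.0
    return new_vecs
```

Because each update is a convex combination of the observed vector and the neighbor mean, the procedure converges quickly (the paper runs 10 iterations).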

4. How did they evaluate it?

Word Similarity tasks:

5. Is there a discussion?

Comparison with other training methods:

6. Which paper should be read next?

a1da4 commented 2 years ago

#211

+antonymy

a1da4 commented 2 years ago

#212

+hypernymy