0. Paper
@inproceedings{ijcai2018-634,
  title     = {Biased Random Walk based Social Regularization for Word Embeddings},
  author    = {Ziqian Zeng and Xin Liu and Yangqiu Song},
  booktitle = {Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, {IJCAI-18}},
  publisher = {International Joint Conferences on Artificial Intelligence Organization},
  pages     = {4560--4566},
  year      = {2018},
  month     = {7},
  doi       = {10.24963/ijcai.2018/634},
  url       = {https://doi.org/10.24963/ijcai.2018/634},
}
1. What is it?
They extend socialized word embeddings by sampling each user's neighbors with a biased random walk.
2. What is amazing compared to previous works?
Previous works apply social regularization only to one-hop neighbors (friends, followers), so users with more friends have more neighbor texts contributing to training than users with fewer friends. To avoid this limitation, their method obtains neighbors with a biased random walk.
3. Where is the key to technologies and techniques?
3.1 Socialized Word Embeddings (2017)
Use all documents to train CBOW-based word embeddings: ![Screenshot 2021-12-31 14 23 37](https://user-images.githubusercontent.com/45454055/147805009-471ef764-d05b-40ab-a829-f8ccecc5d3bb.png)
To obtain user embeddings, they train node2vec (so that one-hop neighbors become similar): ![Screenshot 2021-12-31 14 23 56](https://user-images.githubusercontent.com/45454055/147805018-dc7f5696-7b59-4abd-a991-26caef934ac1.png)
Finally, the training objective function is defined as below: ![Screenshot 2021-12-31 14 24 15](https://user-images.githubusercontent.com/45454055/147805027-ca956771-d196-4b6a-9220-c54af8916ff8.png)
From this model, a user-specific word vector for (user i, word j) is defined: ![Screenshot 2021-12-31 14 27 41](https://user-images.githubusercontent.com/45454055/147805144-33091c4a-485e-4db5-add9-3be92b6ae1ab.png)
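A minimal sketch of that last step, assuming (as in Socialized Word Embeddings) that the user-specific word vector is the shared word vector plus the user vector — the dictionary names and dimensions below are illustrative, not from the paper's code:

```python
import numpy as np

dim = 4
word_vecs = {"coffee": np.ones(dim)}        # shared CBOW word embeddings
user_vecs = {"user_i": 0.5 * np.ones(dim)}  # user embeddings from the social term

def user_specific(word, user):
    """Assumed form: user-specific vector = shared word vector + user vector."""
    return word_vecs[word] + user_vecs[user]

v = user_specific("coffee", "user_i")  # vector for (user i, word "coffee")
```

Under this assumption, two users get different vectors for the same word, differing exactly by their user vectors.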
3.2 Biased Second-Order Random Walk
The model in 3.1 uses only one-hop neighbors, so the number of friends a user has directly affects training. To avoid this limitation, they apply a biased second-order random walk.
A bias term is defined as follows: ![Screenshot 2021-12-31 14 33 20](https://user-images.githubusercontent.com/45454055/147805357-65938fb9-cd46-40da-8a85-81c3e4394622.png)
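The second-order walk can be sketched as below — a node2vec-style bias with return parameter `p` and in-out parameter `q` (the graph representation and names are illustrative, not the paper's code):

```python
import random

def biased_walk(graph, start, length, p=1.0, q=1.0):
    """graph: dict mapping node -> set of neighbor nodes (undirected)."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        neighbors = sorted(graph[cur])
        if not neighbors:
            break
        if len(walk) == 1:
            # First step: no previous node, choose uniformly.
            walk.append(random.choice(neighbors))
            continue
        prev = walk[-2]
        weights = []
        for x in neighbors:
            if x == prev:             # distance 0 from prev: return
                weights.append(1.0 / p)
            elif x in graph[prev]:    # distance 1 from prev: stay local
                weights.append(1.0)
            else:                     # distance 2 from prev: move outward
                weights.append(1.0 / q)
        walk.append(random.choices(neighbors, weights=weights)[0])
    return walk
```

Sampling neighbors from such walks gives every user the same number of regularization neighbors, regardless of how many friends they have.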
4. How did they evaluate it?
5. Is there a discussion?
6. Which paper should I read next?