Open Oolev opened 3 years ago
Hello, I haven't used GraphSAGE myself, but I do follow the project. My understanding, based on the published papers, is as follows. Consider a node A for which you are preparing training samples. A positive pair (A, B) is one where B is actually similar to A in some semantic sense: for example, B is within n steps of A, or B shares structural similarities with A (as in struc2vec). A negative pair (A, C) is one where C is not similar to A: for example, you pick C at random from the part of the graph that is more than n steps away from A.
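To make the idea concrete, here is a minimal sketch of such a sampler (not the actual GraphSAGE/StellarGraph implementation — the graph representation, function names, and the "within n steps via BFS" criterion are my own assumptions for illustration):

```python
import random
from collections import deque

def nodes_within(graph, start, n):
    # BFS from `start`: collect every node reachable in at most n steps.
    # `graph` is assumed to be an adjacency dict: node -> list of neighbours.
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == n:
            continue
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return seen

def sample_pairs(graph, anchor, n, num_neg, rng=random):
    # Positive pairs: (anchor, B) for every B within n steps of the anchor.
    near = nodes_within(graph, anchor, n)
    positives = [(anchor, b) for b in near if b != anchor]
    # Negative pairs: (anchor, C) for C sampled from nodes more than
    # n steps away from the anchor.
    far = [c for c in graph if c not in near]
    negatives = [(anchor, c) for c in rng.sample(far, min(num_neg, len(far)))]
    return positives, negatives

# Example on a path graph 0-1-2-3-4-5 with anchor 0 and n = 2:
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
pos, neg = sample_pairs(graph, anchor=0, n=2, num_neg=2)
print(pos)  # [(0, 1), (0, 2)]
print(neg)  # two pairs (0, C) with C drawn from {3, 4, 5}
```

The positive pairs give the model examples it should embed close together, and the negative pairs examples it should push apart; real implementations typically generate positives with random walks rather than an exhaustive BFS.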
This approach was first used to train word2vec models (which build embeddings for words).
Hope the above helps.
Thanks a lot! Much appreciated. It's clear now.
Forgive my lack of knowledge, but I have a simple question regarding the provided demo for node representation learning with GraphSAGE and the use of an unsupervised sampler:
What do you mean by positive and negative node pairs? Could you point me to some useful resources to better understand this concept?