facebookresearch / PyTorch-BigGraph

Generate embeddings from large-scale graph-structured data.
https://torchbiggraph.readthedocs.io/

Forming sentence embedding #56

Closed Arjunsankarlal closed 5 years ago

Arjunsankarlal commented 5 years ago

Sorry if this sounds silly and isn't the right place to ask it, but I felt it could be answered better here than anywhere else. I am a little confused by the concept: a graph embedding gives a numerical representation of a graph. So I could represent a sentence as a graph whose nodes are the words (with their embeddings taken from Word2Vec, FastText, or GloVe), connected based on co-occurrence, with edge weights given by the cosine similarity between the nodes. With this, can I consider the graph embedding to be the sentence embedding of that sentence? And with the same idea, can I construct embeddings for paragraphs and documents from their respective embeddings? Would this be a right approach?
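
For concreteness, a minimal sketch of the construction described above. The `vectors` dict is a hypothetical `{word: np.ndarray}` lookup (e.g. loaded from Word2Vec / FastText / GloVe files), and the window size and tokenization are illustrative placeholders:

```python
# Words as nodes, edges between words that co-occur within a small window,
# edge weights from cosine similarity of pretrained word vectors.
import numpy as np
import networkx as nx

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sentence_to_graph(tokens, vectors, window=2):
    g = nx.Graph()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            u = tokens[j]
            if w != u and w in vectors and u in vectors:
                g.add_edge(w, u, weight=cosine(vectors[w], vectors[u]))
    return g

# Placeholder vectors standing in for real pretrained embeddings.
vectors = {w: np.random.randn(50) for w in "the cat sat on mat".split()}
g = sentence_to_graph("the cat sat on the mat".split(), vectors)
```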

lw commented 5 years ago

I believe you are confused about what a graph embedding is. The objective isn't to assign one vector to the whole graph, but rather to learn a vector for each vertex. PBG does not provide any way of aggregating the embeddings of different nodes in order to obtain a single embedding for the whole graph. This is not a goal.
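
To make the per-node point concrete: a trained PBG checkpoint contains one learned row per entity, which can be read back with h5py. A minimal sketch; the file name (entity type, partition, and checkpoint version) depends on your own config and is a placeholder here:

```python
import h5py

# One HDF5 file per (entity type, partition); "embeddings" is a 2-D dataset
# with one row per node. There is no whole-graph vector anywhere.
with h5py.File("model/example/embeddings_all_0.v50.h5", "r") as hf:
    node_embeddings = hf["embeddings"][...]  # shape: (num_nodes, dim)

print(node_embeddings.shape)
```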

You may have been led into confusion by graph neural networks, which are quite a different thing. To highlight the difference: a graph embedding method like PBG learns a free parameter vector for each node from the graph's structure alone, whereas a graph neural network learns a function that computes a node's representation from the input features of the node and its neighbors.
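
A rough sketch of that contrast in PyTorch, with a simplified mean-aggregation layer standing in for a real GNN (dimensions and data are placeholders):

```python
import torch
import torch.nn as nn

num_nodes, feat_dim, dim = 1000, 16, 32

# Shallow graph embedding (what PBG learns): the vectors themselves are the
# learned parameters, retrieved by a plain table lookup.
shallow = nn.Embedding(num_nodes, dim)
v = shallow(torch.tensor([42]))  # no input features involved

# GNN-style layer: parameters are the weights of a function applied to input
# features, so it can produce representations for unseen nodes.
class MeanAggLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, neighbor_idx):
        # x: (num_nodes, in_dim); neighbor_idx: one index tensor per node.
        agg = torch.stack([x[nbrs].mean(dim=0) for nbrs in neighbor_idx])
        return torch.relu(self.lin(torch.cat([x, agg], dim=1)))

x = torch.randn(num_nodes, feat_dim)
neighbor_idx = [torch.randint(0, num_nodes, (5,)) for _ in range(num_nodes)]
h = MeanAggLayer(feat_dim, dim)(x, neighbor_idx)  # (num_nodes, dim)
```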

If you need more information you should read our documentation (this page and this page) and perhaps take a look at @adamlerer's talk at SysML, which you can find here.

laifi commented 5 years ago

Hi @Arjunsankarlal, I think what you are saying can make sense if you use a shallow encoding approach, such as using SkipGram to encode the structure of a graph (DeepWalk). With this approach, nodes can be seen as words and sub-graphs (sets of nodes) as sentences (sets of words). You could even train a doc2vec model to encode sub-graphs as sentence embeddings; see the sketch below. I might add to what @lerks mentioned: GNNs can be unsupervised too, and the features could be just the one-hot encoding of the nodes (GraphSAGE, for example).
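
A minimal sketch of that shallow-encoding recipe, assuming gensim >= 4 and a small networkx graph; the walk counts, lengths, and the two-way sub-graph tagging are all illustrative:

```python
import random
import networkx as nx
from gensim.models import Word2Vec
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def random_walk(g, start, length=10):
    # Uniform random walk; node IDs stringified so they act as "words".
    walk = [start]
    for _ in range(length - 1):
        nbrs = list(g.neighbors(walk[-1]))
        if not nbrs:
            break
        walk.append(random.choice(nbrs))
    return [str(n) for n in walk]

g = nx.karate_club_graph()
walks = [random_walk(g, n) for n in g.nodes() for _ in range(10)]

# DeepWalk: SkipGram over walks, one vector per node.
node_model = Word2Vec(walks, vector_size=64, window=5, min_count=1, sg=1)

# Doc2Vec over walks tagged by a (here arbitrary) sub-graph ID: one vector
# per sub-graph, analogous to a sentence embedding.
docs = [TaggedDocument(w, [f"subgraph_{i % 2}"]) for i, w in enumerate(walks)]
doc_model = Doc2Vec(docs, vector_size=64, min_count=1)
```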

adamlerer commented 5 years ago

@lerks is correct: PBG takes a graph as input and outputs embeddings of its nodes; it does not construct an embedding of the graph. I will point out, though, that the terminology is a little overloaded, because there are other methods that do construct embeddings of whole graphs (often for things like embeddings of protein molecules). They often work by doing our kind of graph embedding and then aggregating the node embeddings in some way, but there are other approaches.
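
The simplest such aggregation is to pool the per-node embeddings, e.g. by averaging. A sketch, with a placeholder array standing in for embeddings loaded from a checkpoint:

```python
import numpy as np

node_embeddings = np.random.randn(100, 32)       # placeholder (num_nodes, dim)
graph_embedding = node_embeddings.mean(axis=0)   # one vector for the graph
```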

Anyway, I agree that PBG is not the best approach for this problem. AFAIK the state of the art for sentence embeddings would be BERT (one example I found by Google search is https://github.com/lonePatient/bert-sentence-similarity-pytorch).
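
For reference, a minimal sketch of the BERT route for sentence embeddings using the Hugging Face transformers library (a generic illustration, not the linked repo's method; the model choice and mean pooling are assumptions):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = ["Graphs are made of nodes and edges.", "BERT embeds sentences."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq, dim)

# Mean-pool the token states over non-padding positions.
mask = batch["attention_mask"].unsqueeze(-1).float()  # (batch, seq, 1)
sentence_embeddings = (hidden * mask).sum(1) / mask.sum(1)  # (batch, dim)
```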

lw commented 5 years ago

I'm closing this as the OP has acknowledged the recent posts but there has been no further activity for almost two weeks. Feel free to reopen.