Closed SRL94 closed 2 years ago

Hi, in Equation (2), CONVEX initialises the word embeddings of a question using word2vec, but how does CONVEX initialise the node embeddings?
Hello,
within each node you would have a phrase (the label of the KB item), so the node embeddings are the averaged word embeddings of that phrase.
We do not materialize these initial node embeddings; instead, we directly compute the matching similarity with the question: https://github.com/PhilippChr/CONVEX/blob/260d02933748abc74b0fc1d317ad46181960c2ce/convex.py#L226-L236
This function implements Equation (1), using cosine similarity.
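For intuition, here is a minimal sketch of that computation (not the actual CONVEX code): the node label and the question are each embedded as the average of their word vectors, and the match score is their cosine similarity, as in Equation (1). The `glove` argument is an assumed dict-like mapping from word to NumPy vector.

```python
import numpy as np

def phrase_embedding(phrase, glove):
    # Average the word vectors of the phrase (e.g. the KB-item label).
    # Words missing from the vocabulary are skipped here.
    vectors = [glove[w] for w in phrase.lower().split() if w in glove]
    return np.mean(vectors, axis=0) if vectors else None

def match_similarity(question, label, glove):
    # Cosine similarity between the averaged question embedding and
    # the averaged node (label) embedding.
    q = phrase_embedding(question, glove)
    n = phrase_embedding(label, glove)
    if q is None or n is None:
        return 0.0  # no known words on either side -> similarity 0
    return float(np.dot(q, n) / (np.linalg.norm(q) * np.linalg.norm(n)))
```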
Hope this answers your question.
Regards, Philipp
How about unknown words?
Unknown words would get a zero vector; in that case a similarity of 0 is returned. https://github.com/PhilippChr/CONVEX/blob/260d02933748abc74b0fc1d317ad46181960c2ce/library/glove_similarity.py#L34-L35
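A minimal sketch of that fallback, assuming the same dict-like `glove` mapping as above (the function name is hypothetical, not the one in glove_similarity.py):

```python
import numpy as np

def word_similarity(w1, w2, glove):
    # If either word is out of vocabulary, short-circuit to 0 instead
    # of dividing by a zero norm in the cosine formula.
    if w1 not in glove or w2 not in glove:
        return 0.0
    v1, v2 = glove[w1], glove[w2]
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
```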