JD-AI-Research-Silicon-Valley / SACN

End-to-end Structure-Aware Convolutional Networks for Knowledge Base Completion

Theoretical question - How is the translational property of the embeddings maintained? #14

Open kiranramnath007 opened 4 years ago

kiranramnath007 commented 4 years ago

First of all, thanks for sharing your great work!

I am reading through your paper and finding it difficult to understand how the translational property of the embeddings is maintained. I do see that, with the reshape operation removed, every 2 x K convolutional filter computes a dimension-wise weighted sum of the subject and relation embeddings for each fact triple.

However, since this is followed by A) vectorizing the feature maps across the many channels, B) a non-linearity applied after a matrix multiplication with the weight W, and C) an inner product with the object embeddings, it seems that the final representation is no longer translational. A sketch of how I understand this pipeline is below.
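For concreteness, here is a minimal PyTorch sketch of the scoring pipeline as I read it from the paper. The class name, embedding size K, channel count C, and kernel width w are illustrative placeholders, not the authors' actual code or configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvTransEScorer(nn.Module):
    """Sketch of a ConvTransE-style scorer; hyper-parameters are illustrative."""

    def __init__(self, num_entities, num_relations, K=200, C=100, w=5):
        super().__init__()
        self.ent = nn.Embedding(num_entities, K)
        self.rel = nn.Embedding(num_relations, K)
        # 1D convolution over the embedding dimension; the two input channels
        # are the subject and relation embeddings (no reshape into a 2D grid).
        self.conv = nn.Conv1d(in_channels=2, out_channels=C,
                              kernel_size=w, padding=w // 2)
        self.W = nn.Linear(C * K, K)  # step B: matrix multiplication with W

    def forward(self, s_idx, r_idx):
        e_s = self.ent(s_idx)               # (batch, K)
        e_r = self.rel(r_idx)               # (batch, K)
        x = torch.stack([e_s, e_r], dim=1)  # (batch, 2, K)
        x = self.conv(x)                    # (batch, C, K)
        x = x.flatten(start_dim=1)          # step A: vectorize the channels
        x = F.relu(self.W(x))               # step B: non-linearity after W
        return x @ self.ent.weight.t()      # step C: inner product with objects
```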

How can I use your architecture but still derive embeddings that are translational (i.e., head + rel ≈ tail)? One of my use cases depends heavily on the translational property.

Thanks in advance! Kiran

chaoshangcs commented 4 years ago

Hi Kiran, ConvTransE uses a 1D convolution, as shown in formula (6). Each filter computes a weighted sum over the aligned dimensions of the subject and relation embeddings, so this operation keeps the translational property. Thanks for your question.
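A toy check of this weighted-sum reading (using kernel width 1 for simplicity; with wider kernels each output dimension is still a linear combination, just over a small window of neighboring dimensions; values and shapes are illustrative):

```python
import torch
import torch.nn as nn

# With kernel width 1, each output dimension of the 1D convolution is
# alpha * e_s[d] + beta * e_r[d], a fixed per-dimension linear combination
# of the subject and relation embeddings. With alpha = beta = 1 this
# reduces to the plain translation e_s + e_r.
K = 4
e_s = torch.randn(K)
e_r = torch.randn(K)

conv = nn.Conv1d(in_channels=2, out_channels=1, kernel_size=1, bias=False)
alpha, beta = conv.weight[0, 0, 0], conv.weight[0, 1, 0]

x = torch.stack([e_s, e_r]).unsqueeze(0)  # (1, 2, K)
out = conv(x).squeeze()                   # (K,)

# The convolution output matches the explicit weighted sum dimension-wise.
assert torch.allclose(out, alpha * e_s + beta * e_r, atol=1e-6)
```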