felipemello1 opened 2 years ago
Thank you very much for your suggestion, but sometimes, although an equivalent value exists, the optimization method might not find it. We will test the performance in the future, but due to some personal health problems we cannot test it now. If you get good results, it would be great if you could contribute a pull request!
Hi, I was checking the convolution, and it looks like there are some expensive layers there that can be completely eliminated:
The code is:
Problem 1:
Is it necessary to run a fully connected layer over the embeddings? As far as I understand, the embeddings can naturally learn the same projection as emb = self.fc_c(emb). This becomes even more expensive when you consider that the conv might have only 20 types of edges, yet it runs this fully connected layer hundreds of thousands of times over the same 20 repeated types.
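To make the point concrete, here is a minimal sketch of the pattern (the sizes and the names `edge_emb` / `fc_c` / `num_edge_types` are assumptions for illustration, not the repo's actual code). A lookup followed by a fixed linear layer is still just a lookup, so the embedding table can learn the projected values directly; and even if the projection is kept, it only needs to be applied once per edge type, not once per edge:

```python
import torch
import torch.nn as nn

# Hypothetical current pattern: an embedding lookup followed by a linear
# projection applied per edge, even though there are only ~20 edge types.
num_edge_types, hidden_dim = 20, 64               # illustrative sizes
edge_emb = nn.Embedding(num_edge_types, hidden_dim)
fc_c = nn.Linear(hidden_dim, hidden_dim)          # the layer that could be removed

edge_types = torch.randint(0, num_edge_types, (100_000,))  # one type id per edge
emb = fc_c(edge_emb(edge_types))                  # linear layer run 100k times

# Same capacity without the extra layer: the embedding table itself can learn
# the projected vectors directly.
emb_simplified = edge_emb(edge_types)

# If the projection has to stay, it can at least be applied once per type
# instead of once per edge, then gathered:
projected_table = fc_c(edge_emb.weight)           # (num_edge_types, hidden_dim)
emb_cheap = projected_table[edge_types]
```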
Problem 2:
e_feat, feat_src and feat_dst are the outputs of an MLP. Is it necessary to multiply them by a constant (called attention here)? I guess the MLP can naturally achieve the same value.
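As a rough illustration of why the multiplication is redundant (a toy sketch; `fc_src` and `attn_src` are placeholder names, and I am assuming the constant acts as an elementwise scale on the linear output), the learned constant can be folded straight into the linear layer's weights:

```python
import torch
import torch.nn as nn

# Hypothetical current pattern: the output of a linear layer is rescaled by a
# learned constant before being used as the attention term.
hidden_dim = 64
fc_src = nn.Linear(hidden_dim, hidden_dim)
attn_src = nn.Parameter(torch.randn(hidden_dim))   # the "attention" constant

h_src = torch.randn(1000, hidden_dim)
feat_src = fc_src(h_src) * attn_src                # extra multiply per node

# The same function is expressible by the linear layer alone, since
# (W x + b) * a == (diag(a) W) x + (a * b). Folding the scale into the
# weights removes the multiply with no loss of expressiveness.
with torch.no_grad():
    folded = nn.Linear(hidden_dim, hidden_dim)
    folded.weight.copy_(fc_src.weight * attn_src.unsqueeze(1))
    folded.bias.copy_(fc_src.bias * attn_src)

assert torch.allclose(folded(h_src), feat_src, atol=1e-6)
```

During training the MLP would simply learn the folded weights on its own, which is why the explicit constant should not be needed.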
We can just say that, if you remove these two parts, the attention can be calculated simply as right + left + edge_emb (`graph.apply_edges(fn.u_add_v('feat_src', 'feat_dst', 'e_feat'))`), without doing all of these transformations beforehand.
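A minimal sketch of what the simplified score could look like with DGL, assuming `feat_src`, `feat_dst` and the edge embeddings are already stored under those field names (the random graph below is just placeholder data):

```python
import dgl
import dgl.function as fn
import torch

hidden_dim = 64
g = dgl.rand_graph(100, 500)                      # placeholder graph
g.ndata['feat_src'] = torch.randn(g.num_nodes(), hidden_dim)
g.ndata['feat_dst'] = torch.randn(g.num_nodes(), hidden_dim)
g.edata['edge_emb'] = torch.randn(g.num_edges(), hidden_dim)

# left + right in a single message-passing call ...
g.apply_edges(fn.u_add_v('feat_src', 'feat_dst', 'e_feat'))

# ... plus the edge-type embedding, with no per-edge linear layers or
# attention constants computed beforehand.
attention = g.edata['e_feat'] + g.edata['edge_emb']
```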