ICT-GIMLab / SeHGNN


About Feature Projection #2

Closed S-rz closed 2 years ago

S-rz commented 2 years ago

Hi, thanks for your significant work. I'm a little confused about the multi-layer feature projection. Does SeHGNN use a different MLP for each meta-path? Specifically, if there are two meta-paths, does SeHGNN use two MLPs? In your code implementation, there seems to be only one MLP, which differs from the paper.

Yangxc13 commented 2 years ago

Thank you for your attention.

As described in the paper, SeHGNN uses a different MLP for each metapath. However, creating multiple torch.nn.Linear() layers and invoking them one by one during training incurs unnecessary extra time cost, so we use a small trick for acceleration.

The core operation is torch.einsum('bcm,cmn->bcn', x, self.W) in class Conv1d1x1, where x holds the propagated features after neighbor aggregation, of shape [batch_size, num_metapaths, in_feature_dim], and W is the trainable weight of shape [num_metapaths, in_feature_dim, out_feature_dim]. The result of this single einsum is equivalent to applying an independent MLP layer per metapath.
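A minimal sketch of this equivalence (the dimensions below are illustrative, not the ones used in SeHGNN): the batched einsum produces the same result as looping over metapaths and applying each metapath's own weight matrix.

```python
import torch

batch_size, num_metapaths, in_dim, out_dim = 8, 3, 16, 32

# Propagated features after neighbor aggregation: [batch, metapaths, in_dim]
x = torch.randn(batch_size, num_metapaths, in_dim)
# One weight matrix per metapath, stacked: [metapaths, in_dim, out_dim]
W = torch.randn(num_metapaths, in_dim, out_dim)

# Single batched projection: metapath channel c is multiplied by W[c]
out_einsum = torch.einsum('bcm,cmn->bcn', x, W)

# Equivalent but slower: an independent linear map per metapath
out_loop = torch.stack(
    [x[:, c, :] @ W[c] for c in range(num_metapaths)], dim=1
)

print(torch.allclose(out_einsum, out_loop, atol=1e-5))  # True
```

The einsum form launches one fused matrix multiplication instead of num_metapaths separate kernel calls, which is where the speedup comes from.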

S-rz commented 2 years ago

Thanks for your work and reply!