Closed · skiusp closed this 1 year ago
"Type-specific linear transformation" is self-explanatory. Let [64]
be the dimension of hidden states, if type A nodes are associated with feature vectors of length [8]
, then the weight matrix to linearly project type A nodes is of shape [8, 64]
.
After this projection operation, every node's vector representation (regardless of node types) is projected into the same latent vector space of shape [64].
Thank you so much for bringing such a wonderful paper. But I have no idea about applying a type-specific linear transformation for each type of nodes by projecting feature vectors into the same latent factor space, which appears in Equation 1 of the paper. I would appreciate it if you could give me a hint or tell me the answer. Thank you very much.
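In symbols, Equation 1 is of the form `h_v = W_{τ(v)} · x_v`, where `τ(v)` is the type of node `v` and `W_{τ(v)}` is that type's own weight matrix. Here is a minimal PyTorch sketch of the idea, assuming two hypothetical node types "A" and "B" with raw feature sizes 8 and 16 (the type names, sizes, and variable names are illustrative, not taken from the paper's code):

```python
import torch
import torch.nn as nn

hidden_dim = 64                      # shared latent dimension
in_dims = {"A": 8, "B": 16}          # per-type raw feature sizes (illustrative)

# One linear layer per node type; nn.Linear(8, 64) holds the [8, 64]
# projection for type A (PyTorch stores the weight as its transpose).
proj = nn.ModuleDict({t: nn.Linear(d, hidden_dim) for t, d in in_dims.items()})

# Raw features: 5 type-A nodes and 3 type-B nodes.
x = {"A": torch.randn(5, 8), "B": torch.randn(3, 16)}

# Apply each type's own projection; all outputs land in the same 64-dim space.
h = {t: proj[t](feat) for t, feat in x.items()}
print(h["A"].shape, h["B"].shape)    # torch.Size([5, 64]) torch.Size([3, 64])
```

Because each type has its own weight matrix, node types with different raw feature dimensionalities can still be aggregated and compared in the shared latent space downstream.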