temp_embed = []
for f in range(self.n_fold):
    temp_embed.append(tf.sparse_tensor_dense_matmul(A_fold_hat[f], ego_embeddings))

side_embeddings = tf.concat(temp_embed, 0)

sum_embeddings = tf.nn.leaky_relu(
    tf.matmul(side_embeddings, self.weights['Wgc%d' % k]) + self.weights['bgc%d' % k])

bi_embeddings = tf.multiply(ego_embeddings, side_embeddings)
bi_embeddings = tf.nn.leaky_relu(
    tf.matmul(bi_embeddings, self.weights['Wbi%d' % k]) + self.weights['bbi%d' % k])

ego_embeddings = sum_embeddings + bi_embeddings
In the code above, sum_embeddings and bi_embeddings are both calculated from the same side_embeddings (i.e. LE). According to the paper, however, sum_embeddings should be calculated from (L+I)E, while bi_embeddings should be calculated from LE.
Could you please explain why?
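For concreteness, here is a rough NumPy sketch of the propagation rule as I read it in the paper: the sum term uses (L+I)E while the bi term uses LE. All names here (n, d, W1, W2, the dense L stand-in) are mine for illustration, not from the repo, and I use dense matrices instead of the sparse folds:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3                      # number of nodes, embedding size
E = rng.standard_normal((n, d))  # ego embeddings
L = rng.standard_normal((n, n))  # dense stand-in for the Laplacian
W1 = rng.standard_normal((d, d))
W2 = rng.standard_normal((d, d))

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

# Paper (as I understand it):
#   sum term uses (L + I)E, bi term uses (LE) elementwise-multiplied by E
sum_emb = leaky_relu((L + np.eye(n)) @ E @ W1)
bi_emb = leaky_relu(((L @ E) * E) @ W2)
E_next = sum_emb + bi_emb

# Code in question: both terms reuse side_embeddings = LE,
# i.e. sum_emb would instead be leaky_relu(L @ E @ W1).
```

The two versions differ exactly by the self-connection term IE inside the sum branch, which is why I am asking whether the adjacency used in the code already includes self-loops or whether this is an intentional deviation.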