xiangwang1223 / neural_graph_collaborative_filtering

Neural Graph Collaborative Filtering, SIGIR2019
MIT License

The embedding propagation code seems to be inconsistent with the paper #53

Open Dousia opened 3 years ago

Dousia commented 3 years ago

```python
# Quoted from NGCF.py (the enclosing layer loop is included for context):
for k in range(0, self.n_layers):
    temp_embed = []
    for f in range(self.n_fold):
        temp_embed.append(tf.sparse_tensor_dense_matmul(A_fold_hat[f], ego_embeddings))
    # Aggregated neighbor messages: normalized adjacency times current embeddings.
    side_embeddings = tf.concat(temp_embed, 0)
    # Sum term, computed from side_embeddings.
    sum_embeddings = tf.nn.leaky_relu(
        tf.matmul(side_embeddings, self.weights['Wgc%d' % k]) + self.weights['bgc%d' % k])
    # Bi-interaction term, computed from the SAME side_embeddings.
    bi_embeddings = tf.multiply(ego_embeddings, side_embeddings)
    bi_embeddings = tf.nn.leaky_relu(
        tf.matmul(bi_embeddings, self.weights['Wbi%d' % k]) + self.weights['bbi%d' % k])
    ego_embeddings = sum_embeddings + bi_embeddings
```

In the code above, `sum_embeddings` and `bi_embeddings` are both computed from the same `side_embeddings` tensor, i.e. from $\mathcal{L}E$. According to the paper, however, the sum term should be computed from $(\mathcal{L}+I)E$, and only the bi-interaction term should use $\mathcal{L}E$.
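
For reference, the matrix-form propagation rule in the paper (Eq. (7), if I recall the numbering correctly) is

$$
E^{(l)} = \mathrm{LeakyReLU}\Big((\mathcal{L} + I)\,E^{(l-1)}W_1^{(l)} + \mathcal{L}E^{(l-1)} \odot E^{(l-1)}W_2^{(l)}\Big),
\qquad \mathcal{L} = D^{-1/2} A D^{-1/2},
$$

so the sum term propagates over $\mathcal{L} + I$ (self-connection included), while the bi-interaction term uses $\mathcal{L}$ alone. In the code, a single `side_embeddings` tensor feeds both terms.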

Could you please explain why?

GuoshenLi commented 2 years ago

I agree with you. I also think the way they normalize the adjacency matrix in the code (a row-wise mean, i.e. $D^{-1}A$) is not consistent with the symmetric normalization $D^{-1/2} A D^{-1/2}$ used in the paper.
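
To make the normalization point concrete, here is a minimal sketch contrasting the two schemes (my own illustration with hypothetical function names, not code from the repo):

```python
import numpy as np
import scipy.sparse as sp

def mean_norm(adj):
    """Row normalization D^{-1} A: each row is averaged over its degree
    (a sketch of what 'mean' normalization does, not the repo's exact code)."""
    rowsum = np.asarray(adj.sum(axis=1)).flatten()
    d_inv = np.divide(1.0, rowsum, out=np.zeros_like(rowsum, dtype=float),
                      where=rowsum > 0)  # guard against isolated nodes
    return sp.diags(d_inv).dot(adj).tocoo()

def sym_norm(adj):
    """Symmetric normalization D^{-1/2} A D^{-1/2}, as in the paper's Laplacian."""
    rowsum = np.asarray(adj.sum(axis=1)).flatten()
    d_inv_sqrt = np.divide(1.0, np.sqrt(rowsum),
                           out=np.zeros_like(rowsum, dtype=float),
                           where=rowsum > 0)
    d_mat = sp.diags(d_inv_sqrt)
    return d_mat.dot(adj).dot(d_mat).tocoo()

# Toy 3-node graph: node 0 connected to nodes 1 and 2.
adj = sp.coo_matrix(np.array([[0., 1., 1.],
                              [1., 0., 0.],
                              [1., 0., 0.]]))
print(mean_norm(adj).toarray())  # entry (0,1) = 1/2       (1 / d_0)
print(sym_norm(adj).toarray())   # entry (0,1) = 1/sqrt(2) (1 / sqrt(d_0 * d_1))
```

With row normalization, propagation averages over neighbors (a random-walk transition matrix), whereas the symmetric form weights each edge by $1/\sqrt{d_i d_j}$; the two only coincide on regular graphs.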