hexiangnan / adversarial_personalized_ranking

Adversarial Learning, Matrix Factorization, Recommendation
https://github.com/hexiangnan/adversarial_personalized_ranking

Confusion about the implementation of "perturbation calculation" #7

Open LiuJiXiong opened 5 years ago

LiuJiXiong commented 5 years ago

Hi, Dr. He. Thanks for the nice idea proposed in AMF. But I'm a little confused about your implementation of $\partial \hat{y}_{uij}(\Theta + \Delta) / \partial \Delta$. It looks like the code instead uses $\partial \hat{y}_{uij}(\Theta) / \partial \Theta$ (the gradient with respect to the clean embeddings) to calculate the perturbation, as shown in:

```python
self.grad_P, self.grad_Q = tf.gradients(self.loss, [self.embedding_P, self.embedding_Q])
# convert the IndexedSlices data to a dense tensor
self.grad_P_dense = tf.stop_gradient(self.grad_P)
self.grad_Q_dense = tf.stop_gradient(self.grad_Q)
# normalization: new_grad = (grad / |grad|) * eps
self.update_P = self.delta_P.assign(tf.nn.l2_normalize(self.grad_P_dense, 1) * self.eps)
self.update_Q = self.delta_Q.assign(tf.nn.l2_normalize(self.grad_Q_dense, 1) * self.eps)
```

because the loss being differentiated is `self.loss` rather than `self.opt_loss` when updating the perturbation matrices.
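(Editor's note, not part of the original thread: for an additive perturbation the two gradients coincide, since $\hat{y}_{uij}(\Theta + \Delta)$ depends on $\Theta$ and $\Delta$ only through their sum, so $\partial \hat{y} / \partial \Delta = \partial \hat{y} / \partial \Theta$ at the same point. A minimal numerical sketch of this identity, using a hypothetical BPR-style pairwise score and NumPy; the names `score` and `num_grad` are illustrative and not from the repository:)

```python
import numpy as np

def score(p, qi, qj, dp):
    """BPR-style pairwise score y_uij, with perturbation dp added to the user embedding."""
    return np.dot(p + dp, qi - qj)

def num_grad(f, x, eps=1e-6):
    """Central-difference numerical gradient of scalar function f at vector x."""
    g = np.zeros_like(x)
    for k in range(x.size):
        e = np.zeros_like(x)
        e[k] = eps
        g[k] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
p, qi, qj = rng.normal(size=(3, 4))
dp = np.zeros(4)  # perturbation is zero at the moment the gradient is taken

# gradient of the score w.r.t. the perturbation delta ...
g_delta = num_grad(lambda d: score(p, qi, qj, d), dp)
# ... equals the gradient of the score w.r.t. the embedding itself
g_theta = num_grad(lambda x: score(x, qi, qj, dp), p)

assert np.allclose(g_delta, g_theta, atol=1e-5)
```

So differentiating with respect to the embedding variables, as `tf.gradients(self.loss, [self.embedding_P, self.embedding_Q])` does, yields the same value as differentiating with respect to the perturbation, under this additive-perturbation assumption.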

zxm97 commented 5 years ago


I have the same question. Have you figured it out?