danieljakubovitz / Jacobian_Regularization

Jacobian Regularization for improved DNN robustness to adversarial attacks

Only input-output Jacobian of the first image in a batch has value #1

Open Yujun-Shi opened 5 years ago

Yujun-Shi commented 5 years ago

Hi there, I added the following lines to the original code to print the value of the input-output Jacobian, and found that during training only the Jacobian of the first image in each batch is nonzero. (My TensorFlow version is 1.7.0.)

import tensorflow as tf

# x, y_, keep_prob, Jacobian, X_to_train, Y_to_train and the iteration
# counts are defined in the repo's training script.
with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  for i in range(num_of_iterations + num_of_post_processing_iterations):
    # some code
    if i % 100 == 0:
      # shape: (10, 500, 784), i.e. (output_dim, N, input_dim)
      Jacobian_numpy = Jacobian.eval(feed_dict={x: X_to_train, y_: Y_to_train, keep_prob: 1.0})
      # shape: (500, 10, 784), i.e. (N, output_dim, input_dim)
      Jacobian_numpy = Jacobian_numpy.transpose((1, 0, 2))
      print(Jacobian_numpy[0])  # every entry of the matrix has some value
      print(Jacobian_numpy[1])  # all zeros, and the same for any batch index >= 1

The network's parameters can of course still be regularized in this situation (only one image per step contributes to the Jacobian term), but it is not exactly consistent with the paper.
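To make the expected behavior concrete, here is a self-contained numpy sketch (a hypothetical single linear layer standing in for the repo's MNIST network, with finite differences instead of tf.gradients). For a linear model y = x @ W, the per-example input-output Jacobian of every sample equals W.T, so every batch index should have nonzero entries, not only index 0:

import numpy as np

rng = np.random.default_rng(0)
N, input_dim, output_dim = 4, 6, 3
W = rng.standard_normal((input_dim, output_dim))
X = rng.standard_normal((N, input_dim))

def per_example_jacobian(f, X, eps=1e-6):
    """Finite-difference Jacobian, shape (N, output_dim, input_dim)."""
    Y = f(X)
    J = np.zeros((X.shape[0], Y.shape[1], X.shape[1]))
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += eps
        J[:, :, j] = (f(Xp) - Y) / eps
    return J

J = per_example_jacobian(lambda X: X @ W, X)
print(J.shape)               # (4, 3, 6), i.e. (N, output_dim, input_dim)
print(np.allclose(J, W.T[None], atol=1e-4))  # True: every sample is nonzero

If the repo's code were computing the full per-example Jacobian, Jacobian_numpy[1], Jacobian_numpy[2], etc. would behave like the rows of J here rather than coming out all zeros.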

TakeByEarn commented 5 years ago

I guess this is a simplification of the average Jacobian loss: instead of computing tf.reduce_mean(tf.reduce_sum(jacobian_loss)) over the whole batch, the author uses the Jacobian loss of a single sample to approximate the batch-averaged Jacobian loss.

TakeByEarn commented 5 years ago

If you calculated every sample's Jacobian loss and then averaged, the calculation would take much more time.
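A minimal numpy sketch of this trade-off, using a hypothetical tiny ReLU network and finite differences (not the repo's TensorFlow code): the batch-averaged squared-Frobenius regularizer costs N Jacobian computations, while the single-sample version costs one, and for a nonlinear model the two values generally differ.

import numpy as np

rng = np.random.default_rng(0)
N, input_dim, hidden_dim, output_dim = 8, 6, 5, 3
W1 = rng.standard_normal((input_dim, hidden_dim))
W2 = rng.standard_normal((hidden_dim, output_dim))

def f(X):
    return np.maximum(X @ W1, 0.0) @ W2  # tiny ReLU net

def jacobian(x, eps=1e-6):
    """Jacobian of one example, shape (output_dim, input_dim);
    costs one extra forward pass per input dimension."""
    y = f(x[None])[0]
    J = np.zeros((y.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp[None])[0] - y) / eps
    return J

X = rng.standard_normal((N, input_dim))
per_sample = [np.sum(jacobian(x) ** 2) for x in X]  # N Jacobian computations
full = np.mean(per_sample)     # batch-averaged regularizer, as in the paper
approx = per_sample[0]         # single-sample approximation, N times cheaper
print(full, approx)

Whether the cheaper estimate is an acceptable approximation or a bug is exactly what this issue is asking the author to clarify.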