1202kbs / Understanding-NN

TensorFlow tutorial for various Deep Neural Network visualization techniques

Question about section 2.3 #5

Closed kirarenctaon closed 5 years ago

kirarenctaon commented 5 years ago

In section 2.3, you described backprop_dense like this:

def backprop_dense(activation, kernel, bias, relevance):
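    # positive parts of the weights and bias: W^+ = max(0, W), b^+ = max(0, b)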
    W_p = tf.maximum(0., kernel)
    b_p = tf.maximum(0., bias)
    z_p = tf.matmul(activation, W_p) + b_p
    s_p = relevance / z_p
    c_p = tf.matmul(s_p, tf.transpose(W_p))

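    # negative-part branch (note: as quoted, it still uses tf.maximum here)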
    W_n = tf.maximum(0., kernel)
    b_n = tf.maximum(0., bias)
    z_n = tf.matmul(activation, W_n) + b_n
    s_n = relevance / z_n
    c_n = tf.matmul(s_n, tf.transpose(W_n))

    return activation * (self.alpha * c_p + (1 - self.alpha) * c_n) 

For the negative case, it would be better to change "tf.maximum" to "tf.minimum". The LRP class in models_2_3 also uses "tf.minimum".
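
For reference, here is a minimal sketch of the corrected function, assuming alpha is passed as an explicit argument (the snippet above references self.alpha, so in the repo it is presumably a method on the LRP class):

import tensorflow as tf

def backprop_dense(activation, kernel, bias, relevance, alpha):
    # positive parts: W^+ = max(0, W), b^+ = max(0, b)
    W_p = tf.maximum(0., kernel)
    b_p = tf.maximum(0., bias)
    z_p = tf.matmul(activation, W_p) + b_p
    s_p = relevance / z_p
    c_p = tf.matmul(s_p, tf.transpose(W_p))

    # negative parts: W^- = min(0, W), b^- = min(0, b)
    W_n = tf.minimum(0., kernel)
    b_n = tf.minimum(0., bias)
    z_n = tf.matmul(activation, W_n) + b_n
    s_n = relevance / z_n
    c_n = tf.matmul(s_n, tf.transpose(W_n))

    # alpha-beta LRP rule: activation * (alpha * c_p - beta * c_n), with beta = alpha - 1
    return activation * (alpha * c_p + (1 - alpha) * c_n)

In practice, a small epsilon stabilizer is often added to z_p and z_n before the division to avoid dividing by zero.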

Btw, your tutorials are really helpful for me. Thanks :)

1202kbs commented 5 years ago

Thank you for pointing that out! I fixed the typo.

I'm glad you found the tutorials helpful!