iamtrask / Grokking-Deep-Learning

this repository accompanies the book "Grokking Deep Learning"

Inactive Activation gradients #56

Open Seabrand opened 3 years ago

Seabrand commented 3 years ago

In chapter 8, the backpropagation through the activation-function gradients appears to be off: if you want the derivative of an activation function at a given input, σ'(x), shouldn't you evaluate it at that input rather than at the output y = σ(x)?

For example, if you compute `layer_1 = relu(np.dot(layer_0, weights_0_1))` in the forward direction, then propagating backward should use the input of the activation function, i.e.

`layer_1_delta = layer_2_delta.dot(weights_1_2.T) * relu2deriv(np.dot(layer_0, weights_0_1))`

and not, as suggested,

`layer_1_delta = layer_2_delta.dot(weights_1_2.T) * relu2deriv(layer_1)`

After all, applying `relu2deriv(relu(x))` yields `((x >= 0) * x) >= 0`, which is always true, so the multiplication is effectively the identity and does not change anything. The effect on training is not huge, but it does impact overfitting, the amount of loss, and in fact some of the narrative.
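
To make the suggestion concrete, here is a minimal runnable sketch of a chapter-8-style two-layer network in which the forward pass keeps the pre-activation around and the backward pass applies the derivative to it. The data, shapes, learning rate, and the `relu`/`relu2deriv` definitions below are illustrative assumptions, not the book's exact code.

```python
import numpy as np

np.random.seed(1)

def relu(x):
    return (x > 0) * x            # forward: max(0, x)

def relu2deriv(x):
    return x > 0                  # ReLU derivative mask w.r.t. its input

# toy data and weights, shapes chosen only for illustration
layer_0 = np.random.rand(4, 3)                  # 4 examples, 3 input features
targets = np.random.rand(4, 1)                  # 1 target per example
weights_0_1 = 0.2 * np.random.rand(3, 5) - 0.1  # hidden layer of 5 units
weights_1_2 = 0.2 * np.random.rand(5, 1) - 0.1
alpha = 0.1

for _ in range(100):
    # forward pass: keep the pre-activation so it can be reused in backprop
    pre_act_1 = np.dot(layer_0, weights_0_1)
    layer_1 = relu(pre_act_1)
    layer_2 = np.dot(layer_1, weights_1_2)

    # backward pass: derivative evaluated at the *input* of the activation
    layer_2_delta = layer_2 - targets
    layer_1_delta = layer_2_delta.dot(weights_1_2.T) * relu2deriv(pre_act_1)

    # gradient-descent updates
    weights_1_2 -= alpha * layer_1.T.dot(layer_2_delta)
    weights_0_1 -= alpha * layer_0.T.dot(layer_1_delta)
```

Note that with a strict `> 0` test the two variants coincide, since `relu(x) > 0` exactly when `x > 0`; the difference only shows up when the derivative is written with `>= 0`, in which case `relu2deriv(layer_1)` evaluates to all ones.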

AshishPandagre commented 3 years ago

I have the same doubt. Andrew Ng also does it the way we expect. Below are screenshots I took from Andrew Ng's course.

[Screenshots from Andrew Ng's course: Screenshot (97), Screenshot (98)]