Notably, in chapter 8 the backpropagation through the activation-function gradient appears off: if you want the derivative of an activation function at a given input, σ'(x), shouldn't you evaluate the gradient at that input rather than at the output y = σ(x)?
Example: if you calculate
layer_1 = relu(np.dot(layer_0,weights_0_1))
in the forward direction, then propagating backward would require
layer_1_delta = layer_2_delta.dot(weights_1_2.T) * relu2deriv(np.dot(layer_0,weights_0_1))
i.e. evaluated at the input to the activation function, and not, as the book suggests,
layer_1_delta = layer_2_delta.dot(weights_1_2.T) * relu2deriv(layer_1)
After all, with relu2deriv defined as (x >= 0), applying relu2deriv(relu(x)) yields (relu(x) >= 0), which is 1 everywhere since ReLU outputs are non-negative — the mask becomes the identity and changes nothing.
The effect on training is not huge, but it does change the overfitting behavior, the final loss, and in fact some of the book's narrative.
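A minimal sketch of the point above, assuming relu2deriv is defined with >= (the exact definition may differ between printings of the book): evaluated at the pre-activation, the mask zeroes out negative entries, while evaluated at the ReLU output it is trivially 1 everywhere.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def relu2deriv(x):
    # Derivative mask using >=, as discussed above
    return (x >= 0).astype(float)

pre_activation = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
output = relu(pre_activation)

mask_from_input = relu2deriv(pre_activation)   # zeroes the negative pre-activations
mask_from_output = relu2deriv(output)          # ReLU output is always >= 0, so all ones

print(mask_from_input)   # [0. 0. 1. 1. 1.]
print(mask_from_output)  # [1. 1. 1. 1. 1.]
```

Note that with a strict comparison (x > 0) the two masks would agree except exactly at x = 0, which is why the discrepancy hinges on how relu2deriv is defined.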