Vaticinator opened this issue 4 years ago
So both versions are possible, but with that definition of the delta the weight updates need += instead of -=:
# Version 1: delta = target - prediction, so the weights are updated with +=
layer_2_delta = walk_vs_stop[i:i+1] - layer_2
layer_1_delta = layer_2_delta.dot(weights_1_2.T) * relu2deriv(layer_1)
weights_1_2 += alpha * layer_1.T.dot(layer_2_delta)
weights_0_1 += alpha * layer_0.T.dot(layer_1_delta)

#### is the same as ####

# Version 2: delta = prediction - target, so the weights are updated with -=
layer_2_delta = layer_2 - walk_vs_stop[i:i+1]
layer_1_delta = layer_2_delta.dot(weights_1_2.T) * relu2deriv(layer_1)
weights_1_2 -= alpha * layer_1.T.dot(layer_2_delta)
weights_0_1 -= alpha * layer_0.T.dot(layer_1_delta)
The two snippets produce the same result, but I think the second one is logically right: it matches the standard gradient-descent convention of defining the error delta as prediction minus target and then subtracting the gradient from the weights.
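A quick way to confirm the equivalence is to run one update step with both sign conventions from identical starting weights and compare the results. This is only a sketch: the input row, target, seed, and layer sizes below are made-up stand-ins rather than the book's streetlights data; only relu/relu2deriv and the update formulas follow the snippets above.

import numpy as np

def relu(x):
    return (x > 0) * x          # x where x > 0, else 0

def relu2deriv(output):
    return output > 0           # 1 where the forward output was positive

np.random.seed(0)
alpha = 0.1

# Hypothetical stand-in data: one 3-feature input row and one target value.
layer_0 = np.random.random((1, 3))
target = np.array([[1.0]])

# Identical starting weights for both versions.
w01_a = 2 * np.random.random((3, 4)) - 1
w12_a = 2 * np.random.random((4, 1)) - 1
w01_b, w12_b = w01_a.copy(), w12_a.copy()

# Version 1: delta = target - prediction, update with +=
layer_1 = relu(layer_0.dot(w01_a))
layer_2 = layer_1.dot(w12_a)
layer_2_delta = target - layer_2
layer_1_delta = layer_2_delta.dot(w12_a.T) * relu2deriv(layer_1)
w12_a += alpha * layer_1.T.dot(layer_2_delta)
w01_a += alpha * layer_0.T.dot(layer_1_delta)

# Version 2: delta = prediction - target, update with -=
layer_1 = relu(layer_0.dot(w01_b))
layer_2 = layer_1.dot(w12_b)
layer_2_delta = layer_2 - target
layer_1_delta = layer_2_delta.dot(w12_b.T) * relu2deriv(layer_1)
w12_b -= alpha * layer_1.T.dot(layer_2_delta)
w01_b -= alpha * layer_0.T.dot(layer_1_delta)

# The sign flip cancels, so the updated weights are identical.
assert np.allclose(w01_a, w01_b) and np.allclose(w12_a, w12_b)
print("Both sign conventions give the same weights.")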
Section "Backpropagation in Code": There is:
layer_2_delta = (walk_vs_stop[i:i+1] - layer_2)
I belive it should be:layer_2_delta = (layer_2 - walk_vs_stop[i:i+1])
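For reference, here is a self-contained sketch of that section's training loop with the conventional sign. I am reproducing the streetlights data and hyperparameters from memory, so treat them as assumptions if your edition differs; the point is only the delta definition and the matching -= updates.

import numpy as np

np.random.seed(1)

def relu(x):
    return (x > 0) * x

def relu2deriv(output):
    return output > 0

# Assumed data and hyperparameters from the book's streetlights example.
streetlights = np.array([[1, 0, 1],
                         [0, 1, 1],
                         [0, 0, 1],
                         [1, 1, 1]])
walk_vs_stop = np.array([[1, 1, 0, 0]]).T

alpha, hidden_size = 0.2, 4
weights_0_1 = 2 * np.random.random((3, hidden_size)) - 1
weights_1_2 = 2 * np.random.random((hidden_size, 1)) - 1

for iteration in range(60):
    layer_2_error = 0
    for i in range(len(streetlights)):
        layer_0 = streetlights[i:i + 1]
        layer_1 = relu(np.dot(layer_0, weights_0_1))
        layer_2 = np.dot(layer_1, weights_1_2)
        layer_2_error += np.sum((layer_2 - walk_vs_stop[i:i + 1]) ** 2)

        # Conventional sign: delta = prediction - target, subtract the gradient.
        layer_2_delta = layer_2 - walk_vs_stop[i:i + 1]
        layer_1_delta = layer_2_delta.dot(weights_1_2.T) * relu2deriv(layer_1)
        weights_1_2 -= alpha * layer_1.T.dot(layer_2_delta)
        weights_0_1 -= alpha * layer_0.T.dot(layer_1_delta)

    if iteration % 10 == 9:
        print("Error:", layer_2_error)

Training behaves identically either way; the only difference is that with this sign, layer_2_delta reads as "prediction minus target", which matches the usual gradient-descent derivation.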