Open kasey- opened 5 years ago
I would like ReLU in genann too, but the back-propagation code is also something I don't understand here :(
Yes, only the derivative of the sigmoid, d/dx σ(x) = σ(x)(1 − σ(x)), is implemented in the code. I think we have to write a generic derivative function so that we can add other activation functions like tanh and ReLU. Check the code here: https://github.com/kasey-/genann/blob/27c4c4288728791def0c5fd175c1c3999057ad9d/genann.c#L335 If you agree, I can also work on this.
Hi everyone, to actually implement ReLU and a linear activation, what exactly should we be looking at other than the back-propagation derivative? Would they also require additional support functions, the way sigmoid has genann_act_sigmoid_cached and genann_init_sigmoid_lookup? Any advice...
Hi everyone, to actually implement ReLU and a linear activation, what exactly should we be looking at other than the back-propagation derivative? [snip]
It's right there at the top of this bug report thread: `double inline genann_act_relu`. (To be clear: it still doesn't work using that, because the derivative is hardcoded to sigmoid.)
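For reference, a plausible standalone version of the genann_act_relu mentioned above (a sketch only; genann's real activation functions additionally take a `const genann *` first argument, which is dropped here to keep the example self-contained):

```c
/* ReLU activation: max(0, a). Sketch of the genann_act_relu discussed
 * above, with genann's "const genann *ann" parameter omitted. */
static inline double genann_act_relu(double a) {
    return a > 0.0 ? a : 0.0;
}
```

Defining the forward activation is the easy half; as noted above, training still fails because genann's weight-update code multiplies by the sigmoid derivative regardless of which activation function is installed.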
Hello,
I started to implement the ReLU function for the genann library on a fork under my name (https://github.com/kasey-/genann) before sending you a PR:
But I am a bit lost in the way you compute the back propagation of the neural network. The derivative of the ReLU formula is trivial:
(a > 0.0) ? 1.0 : 0.0
But I cannot understand where I should plug it into your formula, as I do not understand how you compute your back propagation. Did you implement only the derivative of the sigmoid?
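To make the question concrete, here is a sketch (an assumed form, not a verbatim copy of genann_train) of where the derivative enters the output-layer delta in back propagation. With sigmoid, the factor `output * (1 - output)` is what genann hardcodes; plugging in ReLU means replacing that factor with `(a > 0.0) ? 1.0 : 0.0` evaluated on the pre-activation `a`:

```c
/* Output-layer delta, sketch form: delta = (target - output) * f'(...).
 * Sigmoid's derivative can be written from the output alone. */
static double delta_sigmoid(double target, double output) {
    return (target - output) * output * (1.0 - output);
}

/* For ReLU the derivative depends on the sign of the pre-activation,
 * so the training loop must keep (or recompute) that value. */
static double delta_relu(double target, double output, double preact) {
    return (target - output) * ((preact > 0.0) ? 1.0 : 0.0);
}
```

This also highlights a practical wrinkle: the sigmoid delta needs only the neuron's output, while the ReLU delta needs the pre-activation's sign, which a sigmoid-only backprop loop may not retain.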