Closed kinnefix closed 6 years ago
Thanks for the kind words!
Each layer has a "bias neuron." It's basically a fake neuron whose output is always 1 (or -1, or any fixed non-zero constant). By adjusting the weight on it, you can shift the zero point for that layer.
See this for more explanation: https://stackoverflow.com/questions/2480650/role-of-bias-in-neural-networks#2499936
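As a rough sketch of the idea (not the repo's actual code, and the shapes here are just illustrative): appending a constant 1 to the input vector turns the bias into an ordinary learnable weight, which is why each weight matrix gains an extra row.

```python
import numpy as np

# Hypothetical layer: 2 inputs, 3 neurons.
# Appending a constant 1 to the input makes the bias an
# ordinary weight in a (2 + 1) x 3 matrix.
rng = np.random.default_rng(0)
weights = rng.standard_normal((3, 3))  # (2 inputs + 1 bias) x 3 neurons

x = np.array([0.5, -1.2])           # original 2-dimensional input
x_with_bias = np.append(x, 1.0)     # the "bias neuron" is always 1

activation = x_with_bias @ weights  # shape (3,) -- one value per neuron
print(activation.shape)
```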
Thanks for your reply. The keyword I was missing was "bias neuron." With the help of your explanation and the reference to other resources, I can now understand the concepts of neural networks more deeply. Again, thank you very much!
Your work is totally awesome! The code seems really efficient! I just would like to ask a question regarding training the weights associated with each layer. Let's suppose the numbers of inputs, hidden neurons (with 1 hidden layer), and outputs are 2, 3, and 4 respectively. As I understood it, there should be 2×3 + 3×4 weights in total in this case, but your code actually sets the number of weights to (2+1)×3 + (3+1)×4. I tried to understand why by writing the code from scratch, but I wasn't able to find the answer. I'd really appreciate your answer :)
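The two counts can be checked with a quick sketch (assuming one bias neuron feeding each layer, which is what the +1 in the code corresponds to):

```python
# Hypothetical example of the weight counts in the question:
# 2 inputs, 3 hidden neurons, 4 outputs.
n_in, n_hidden, n_out = 2, 3, 4

# Without bias neurons:
without_bias = n_in * n_hidden + n_hidden * n_out  # 2*3 + 3*4 = 18

# With one bias neuron per layer (the +1 in the code):
with_bias = (n_in + 1) * n_hidden + (n_hidden + 1) * n_out  # 3*3 + 4*4 = 25

print(without_bias, with_bias)
```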