doublespending opened this issue 6 years ago
@doublespending That's a good question.
Generally speaking, biases in neural networks are implemented with one bias for every neuron in the forward-pass layers.
Input neurons have no biases, so you should have 100 (hidden 1) + 100 (hidden 2) + 2 (output) biases; every forward-pass neuron has its own bias.
If you count the number of biases you have up there, it should add up to 202.
The code snippet you've provided just adds each neuron's bias to that neuron's output value.
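To make the counting concrete, here is a minimal Python sketch of that per-neuron convention (the input size, variable names, and the absence of an activation function are illustrative; this is not the Danku.sol code):

```python
import random

# Hypothetical layer sizes: the two hidden layers and the output layer come
# from the counts above; the input size is illustrative.
input_size = 4
layer_sizes = [input_size, 100, 100, 2]

# One bias per neuron in every layer after the input layer.
biases = [[random.uniform(-1, 1) for _ in range(n)] for n in layer_sizes[1:]]
weights = [[[random.uniform(-1, 1) for _ in range(prev)] for _ in range(cur)]
           for prev, cur in zip(layer_sizes, layer_sizes[1:])]

assert sum(len(layer) for layer in biases) == 202  # 100 + 100 + 2

def forward(x, weights, biases):
    """Per-neuron biases: each neuron adds its own bias to its output."""
    activations = x
    for layer_w, layer_b in zip(weights, biases):
        activations = [
            sum(w * a for w, a in zip(neuron_w, activations)) + b
            for neuron_w, b in zip(layer_w, layer_b)
        ]
    return activations

print(forward([0.5] * input_size, weights, biases))  # two output values
```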
I'm closing this issue since there's been no response in the last 7 days.
I'm reopening this issue, because apparently I can't see things in plain sight.
The bias implementation stores a bias for every neuron, but the forward pass only uses N_layer of them, i.e. it only uses 1 bias value per layer.
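A minimal Python sketch of that mismatch (the names are assumed, and this is not the contract code): indexing the bias array by layer means every neuron in a layer adds the same bias value, so only one entry per layer is ever read, no matter how many biases were stored.

```python
def forward_per_layer_bias(x, weights, biases):
    # Mirrors the `total += biases[layer_i]` pattern: the bias is indexed by
    # layer, so every neuron in layer layer_i adds the same biases[layer_i],
    # and only len(weights) entries of `biases` are ever read.
    activations = x
    for layer_i, layer_w in enumerate(weights):
        activations = [
            sum(w * a for w, a in zip(neuron_w, activations)) + biases[layer_i]
            for neuron_w in layer_w
        ]
    return activations
```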
According to the logic of the forward_pass2 function in Danku.sol,
total += biases[layer_i];
means that I have two biases if there is only one hidden layer. In other words, the length of the biases array depends only on the number of layers. However, one bias is assigned to each neuron in your paper and in the dutils.neural_network package. In other words, the length of the biases array depends on the number of neurons.
For example, I trained the model with the dutils.neural_network package.
According to the logic of the forward_pass2 function, I should get a biases array containing only two values.
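To put numbers on this (the hidden-layer size is hypothetical; only the one-hidden-layer structure comes from my example): per-neuron training produces one bias for every non-input neuron, while the forward_pass2 indexing reads only one per layer.

```python
hidden_size, output_size = 100, 2  # hypothetical sizes for illustration

# Per-neuron convention (paper / dutils.neural_network): one bias per
# non-input neuron.
per_neuron_count = hidden_size + output_size  # 102 values stored

# Per-layer convention implied by `total += biases[layer_i]`: one bias per
# forward-pass layer (one hidden layer plus the output layer).
per_layer_count = 2                           # only 2 values read

print(per_neuron_count, per_layer_count)      # 102 2
```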