GiorgosXou / NeuralNetworks

A resource-conscious neural network implementation for MCUs
MIT License

Why not work with 10 bit input data? #1

Open nazariiixa opened 4 years ago

nazariiixa commented 4 years ago

I have 10-bit input data like this:

```cpp
const double inputs[110][8] = {
  {540,131,48,3,0,0,0,0},
  {624,167,63,15,0,0,0,0},
  {736,224,96,31,0,0,0,0},
  ...
```

but after training the output is the same for every sample, for example 0.8215888, 0.8215888, 0.8215888, ... After I divide by 1024 I have data like this:

```cpp
const double inputs[110][8] = {
  {0.52734375,0.1279296875,0.046875,0.0029296875,0,0,0,0},
  {0.609375,0.1630859375,0.0615234375,0.0146484375,0,0,0,0},
  {0.71875,0.21875,0.09375,0.0302734375,0,0,0,0},
  ...
```

and it works perfectly.

How can I do this without dividing the input data?

nazariiixa commented 4 years ago

The first two methods do not work; the output values are the same. Yes, everything works with normalization, but this is a microcontroller, there is never enough processor time, and normalization adds an extra division... Could the problem be in the float datatype? As far as I know, on Arduino float = double, "as large as 3.4028235E+38 and as low as -3.4028235E+38", so 1024 should fit easily. The microcontroller is an ESP32; my code is based on "Backpropagation_double_Xor", and I changed only the inputs, expectedOutput, and layers.

GiorgosXou commented 4 years ago

@nazariiixa

Introduction:

First of all, thanks for the insight (I deleted my previous comment because I realised why the issue existed), and I am really sorry for my late reply :/ ...

Reason:

After reviewing my code, I realised that it is because the 10-bit input data is not normalized (most probably I will add this feature in a future update):

It has to do with the backpropagation/training phase and the first inputs:

"If you use an algorithm like resilient backpropagation to estimate the weights of the neural network, then it makes no difference. The reason is that it uses the sign of the gradient, not its magnitude, when changing the weights in the direction of whatever minimizes your error. This is the default algorithm for the neuralnet package in R, by the way.

When does it make a difference? When you are using traditional backpropagation with sigmoid activation functions, it can saturate the sigmoid derivative."

Solution:

To solve this issue you can do one of two things: either experiment with the default normalization function of Arduino (aka map()), which I don't know yet whether it works with floats or doubles... or, as solution 2 (which I think is the best solution), just normalize the data by using the code below:

```cpp
const float MaxInput = 1000; // maximum input value in your array
const float MinInput = 0;    // minimum input value in your array

float Normalize(float &Input_i) // pass Input_i by reference
{
    return (Input_i - MinInput) / (MaxInput - MinInput);
}
```

`inputs[110][8] = { {540,131,48,3,0,0,0,0},...` should then look something like this: `inputs[110][8] = { {0.540,0.131,0.048,0.003,0,0,0,0},...`

Ending:

Thank you very much for pointing it out (: , I should have thought about it... but it's OK anyway (: ... I wish you good luck and a nice day.

I hope I have helped you (:

nazariiixa commented 4 years ago

Ok, thank you!

GiorgosXou commented 4 years ago

It is:

```cpp
return (Input_i - MinInput) / (MaxInput - MinInput);
```

I had by mistake:

```cpp
return (Input_i - MaxInput) / (MaxInput - MinInput);
```

I've edited it, though.