cazala / synaptic

architecture-free neural network library for node.js and the browser
http://caza.la/synaptic

Error doesn't go below 0.2 #293

Closed talvasconcelos closed 6 years ago

talvasconcelos commented 6 years ago

I'm trying to use Synaptic for predicting odds/results outcomes. My training set is 18k entries, and I have the data normalized:

[
  {
    "input": [0.177393, 0.07563, 0.088933],
    "output": [1, 0, 0]
  },
  {
    "input": [0.173267, 0.088235, 0.047141],
    "output": [1, 0, 0]
  },
  {...}
]

I'm using a Perceptron with (3, 6, 3): 3 inputs, one hidden layer of 6 neurons, and 3 outputs. The rate is 0.01 and the error never gets below 0.2. What am I doing wrong?
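For reference, a minimal sketch of that setup with synaptic's stock Architect and Trainer (trainingSet stands in for the 18k entries above; the iteration cap is an assumption, not something stated in this thread):

const { Architect, Trainer } = require('synaptic');

// 3 inputs -> one hidden layer of 6 neurons -> 3 outputs
const net = new Architect.Perceptron(3, 6, 3);
const trainer = new Trainer(net);

trainer.train(trainingSet, {
  rate: 0.01,        // learning rate from this comment
  iterations: 20000, // assumed cap, not stated in the thread
  error: 0.03,       // stop once the error drops this low
  shuffle: true,
  log: 1000          // print the error every 1000 iterations
});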

The outputs are one of 3 labels: H, D, A. They were binarized. Does that make the output 3 values instead of 1?

Thanks, Tiago

ghost commented 6 years ago

OK, so do not binarize your output. I know this is the target you need, but it makes the life of NNs very hard. NNs love floating-point numbers. Also, do not normalize to the extremes of 0 and 1; instead try something like 0.0001 and 0.9998.

In the good old XOR example we like to see the output close to our target, like 0.98 or 0.12, but never exactly binary; that's impossible, and that's why your error rate does not go down.

Once you have a floating-point output, use a simple check to make it binary:

if (output >= 0.5) result = 1;
else result = 0;

If you still have a high error rate, the NN cannot see a pattern in your data, and you may want to break each of your inputs up into a range of moving averages so the NN has an easier time spotting the pattern over time.
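For example, a minimal sketch of such a feature expansion (the window sizes and the rawOdds series are hypothetical illustrations):

function movingAverage(series, window) {
  // Simple moving average: one value per position once a full window exists.
  const out = [];
  for (let i = window - 1; i < series.length; i++) {
    let sum = 0;
    for (let j = i - window + 1; j <= i; j++) sum += series[j];
    out.push(sum / window);
  }
  return out;
}

// Expand one raw feature (rawOdds is a hypothetical series) into
// short-, medium- and long-term views:
const ma5 = movingAverage(rawOdds, 5);
const ma10 = movingAverage(rawOdds, 10);
const ma20 = movingAverage(rawOdds, 20);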

talvasconcelos commented 6 years ago

Changed the values. The output is 1 of 3 values, so 0/1 doesn't cut it. I made it so the output is 0, 0.5 or 1 (see the sketch after this comment). Is this OK? Even so, I can't get below 0.166. I'm trying with both Perceptron and LSTM. My options are:

rate: 0.01,
error: 0.03

Maybe the NN can't really see the pattern!!
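A minimal sketch of that single-output encoding, with a nearest-value decode for reading predictions back out (the helper names are hypothetical):

// H, D, A mapped onto a single scalar target, as described above.
const TARGETS = { H: 0, D: 0.5, A: 1 };

function encode(label) {
  return [TARGETS[label]]; // single-element output vector
}

function decode(output) {
  // Pick the label whose target value is nearest to the raw output.
  let best = null;
  let bestDist = Infinity;
  for (const [label, value] of Object.entries(TARGETS)) {
    const dist = Math.abs(output - value);
    if (dist < bestDist) { bestDist = dist; best = label; }
  }
  return best;
}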

ghost commented 6 years ago

Hmm, is there any "real" output value in your scenario? In the worst case, try 0.25 (as 0) and 0.75 (as 1) so you have plenty of room around both extremes for the NN to work with. You would still post-process and say output < 0.5 is 0 and output > 0.5 is 1. Yes, definitely use LSTM, with 2 hidden layers and the MSE cost function, and use plenty of iterations so the NN can work hard to learn.
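A minimal sketch of that suggestion in synaptic (the memory-block layer sizes and the iteration count are assumptions):

const { Architect, Trainer } = require('synaptic');

// 3 inputs, two memory-block layers (sizes assumed), 1 output.
const net = new Architect.LSTM(3, 4, 4, 1);
const trainer = new Trainer(net);

trainer.train(trainingSet, {
  rate: 0.01,
  iterations: 50000,     // "plenty of iterations" (assumed figure)
  error: 0.03,
  cost: Trainer.cost.MSE // MSE cost, as suggested above
});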

If it's still not working, could you post a file with at least 1000 training samples somewhere?

talvasconcelos commented 6 years ago

WOW!!! Changing the outputs to 0.75, 0.5 and 0.25 made a world of difference!! Running the training now with LSTM(3, 2, 1), and the first 10000 iterations came out with an error of 0.042.

Thanks a lot for your wisdom @Pummelchen! Will explore further...