The number of hidden neurons should be user-definable. Another suggestion is changing the activation functions to linear: a 3-layer NN with linear activation units is equivalent to a 2-layer NN with nonlinear activation units. My tests have shown better accuracy with linear activation units than with logistic activation units.
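As a side note on linear units: stacked linear layers compose into a single linear map. A small NumPy sketch (not netlab code; the layer sizes are arbitrary) makes this concrete:

```python
import numpy as np

# Not netlab code: a NumPy illustration that composing linear layers
# yields a single linear map, since W2 @ (W1 @ x) == (W2 @ W1) @ x.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((5, 3))   # weights of a 3->5 linear layer
W2 = rng.standard_normal((2, 5))   # weights of a 5->2 linear layer
x = rng.standard_normal(3)

two_layers = W2 @ (W1 @ x)         # forward pass through both linear layers
one_layer = (W2 @ W1) @ x          # the same map collapsed into one layer
print(np.allclose(two_layers, one_layer))  # True
```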
Another addition is a random seed, so that different NN weights can be generated on each run. I am not sure whether netlab handles this, but it seems that netlab3.3 does not have this feature.
Finally, we need only a few hidden neurons/units; nHidden = 10 is a good choice.
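A minimal sketch of the two suggestions above, in NumPy rather than netlab's own API (the function name and weight scaling are assumptions for illustration only): seed the generator before initializing the weights, and use nHidden = 10:

```python
import numpy as np

# Illustrative only (not the netlab3.3 API): seeded weight
# initialization for a layer with nHidden = 10 hidden units.
def init_weights(n_in, n_hidden, seed=None):
    # seed=None draws fresh entropy, so each run gets different weights;
    # passing a fixed seed makes a run reproducible.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n_hidden, n_in)) / np.sqrt(n_in)

w1 = init_weights(4, 10, seed=42)
w2 = init_weights(4, 10, seed=42)
w3 = init_weights(4, 10)            # unseeded: almost surely different
print(w1.shape)                     # (10, 4)
print(np.allclose(w1, w2))          # True: same seed, same weights
```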
I would also suggest reducing defaults.epochs from 500 to 200 to speed up training. I am using 100 now and it works fine.
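To illustrate the epoch count as a speed/accuracy trade-off, here is a toy NumPy gradient-descent loop (not the project's trainer; defaults.epochs is only being referenced by name) where 200 epochs already fit a small least-squares problem:

```python
import numpy as np

# Toy trainer (not the project's code): epochs is the knob being tuned.
def train(X, y, epochs=200, lr=0.5):
    rng = np.random.default_rng(0)
    w = rng.standard_normal(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w = train(X, y, epochs=200)
print(np.allclose(X @ w, y))  # True: 200 epochs suffice here
```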
With very best regards,
Mohammed S. Al-Rawi,
Visual neuroscience lab, IBILI, University of Coimbra, Portugal
al-rawi(aaattt)uc(dddooottt)pt
Original issue reported on code.google.com by ms.alr...@gmail.com on 23 Jan 2014 at 10:59