Closed: wagenaartje closed this issue 7 years ago
Deep networks are far more complex than shallow ones, so they take longer to train (although they can also learn far more complex abstractions). You just need to give the network more time, i.e. 200k iterations instead of 20k:
```javascript
var myNetwork = new synaptic.Architect.Perceptron(2, 10, 10, 1);
var trainer = new synaptic.Trainer(myNetwork);
var trainingSet = [
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] }
];
trainer.train(trainingSet, {
  rate: 0.1,
  iterations: 200000,
  error: 0.005,
  log: 1,
  shuffle: true
});
```
You will see it converge somewhere around 75k-100k iterations.
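To see why more iterations fix this, here is a from-scratch sketch (plain JavaScript, not synaptic's internals; all names and the 2-4-1 layout are illustrative) of the same XOR task trained by backpropagation. Running it shows the mean squared error falling steadily as the iteration count grows, which is the same effect the extra iterations give the deeper perceptron above.

```javascript
// Illustrative from-scratch XOR trainer: a 2-4-1 sigmoid network.
// Deterministic pseudo-random init so runs are reproducible.
let seed = 42;
function rand() {
  seed = (seed * 1103515245 + 12345) % 2147483648;
  return seed / 2147483648 - 0.5;
}

const sigmoid = x => 1 / (1 + Math.exp(-x));

// Weights and biases: input(2) -> hidden(4) -> output(1).
const W1 = Array.from({ length: 4 }, () => [rand(), rand()]);
const b1 = Array.from({ length: 4 }, rand);
const W2 = Array.from({ length: 4 }, rand);
let b2 = rand();

const data = [
  { input: [0, 0], target: 0 },
  { input: [0, 1], target: 1 },
  { input: [1, 0], target: 1 },
  { input: [1, 1], target: 0 }
];

function forward(x) {
  const h = W1.map((w, i) => sigmoid(w[0] * x[0] + w[1] * x[1] + b1[i]));
  const y = sigmoid(W2.reduce((s, w, i) => s + w * h[i], b2));
  return { h, y };
}

// One pass over the four patterns; returns mean squared error.
function trainEpoch(rate) {
  let mse = 0;
  for (const { input, target } of data) {
    const { h, y } = forward(input);
    const err = y - target;
    mse += err * err;
    const dOut = err * y * (1 - y); // output-layer delta
    for (let i = 0; i < 4; i++) {
      const dHid = dOut * W2[i] * h[i] * (1 - h[i]); // hidden delta
      W2[i] -= rate * dOut * h[i];
      W1[i][0] -= rate * dHid * input[0];
      W1[i][1] -= rate * dHid * input[1];
      b1[i] -= rate * dHid;
    }
    b2 -= rate * dOut;
  }
  return mse / data.length;
}

const firstError = trainEpoch(0.5);
let lastError = firstError;
for (let i = 0; i < 20000; i++) lastError = trainEpoch(0.5);
console.log("first:", firstError.toFixed(4), "last:", lastError.toFixed(4));
```

The point of the sketch is only that error keeps dropping with more passes; adding a second hidden layer (as in the perceptron above) slows this descent further, which is why 20k iterations is not enough there.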
(Original question:) I'm using the example from the Trainer wiki, but it is NOT converging. When I remove one of the 10-neuron layers, it does work.