It probably makes sense to wait for the network's output to stabilize for XOR? I'm not doing that at the moment. Currently:
While the output is inactive or the network has not been activated at least once:
What is the problem with this? It mirrors the NEAT src, right?
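For reference, a minimal sketch of that loop as I understand it; the method names (`load_inputs`, `outputs_off`, `single_step`) are illustrative and not taken from this repository or the original NEAT source:

```python
def activate(network, inputs, max_iterations=20):
    """Propagate inputs until every output node has received a signal
    and the network has been stepped at least once."""
    network.load_inputs(inputs)          # hypothetical helper
    activated_once = False
    iterations = 0
    while network.outputs_off() or not activated_once:
        if iterations > max_iterations:
            return None                  # signal never reached the outputs
        network.single_step()            # one synchronous propagation step
        activated_once = True
        iterations += 1
    return [node.output for node in network.output_nodes]
```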
The first step is to fix the case where signals are dropped; a network with a single hidden node should be able to solve XOR.
"How do I ensure that a network stabilizes before taking its output(s) for a classification problem?
The cheap and dirty way to do this is just to activate n times in a row where n>1, and hope there are not too many loops or long pathways of hidden nodes.
The proper (and quite nice) way to do it is to check every hidden node and output node from one timestep to the next, and see if nothing has changed, or at least not changed within some delta. Once this criterion is met, the output must be stable.
Note that output may not always stabilize in some cases. Also, for continuous control problems, do not check for stabilization as the network never "settles" but rather continuously reacts to a changing environment. Generally, stabilization is used in classification problems, or in board games. "
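A sketch of the "proper" stabilization check described in the quote: keep activating until no hidden or output node changes by more than some delta between consecutive timesteps. The names and the default delta are assumptions, not part of the repository:

```python
def activate_until_stable(network, inputs, delta=1e-4, max_steps=100):
    """Activate until hidden/output node values change by at most
    `delta` between consecutive timesteps (names are illustrative)."""
    network.load_inputs(inputs)
    previous = None
    for _ in range(max_steps):
        network.single_step()
        current = [node.output for node in network.hidden_and_output_nodes]
        if previous is not None and all(
            abs(a - b) <= delta for a, b in zip(current, previous)
        ):
            return [node.output for node in network.output_nodes]
        previous = current
    return None  # never settled; can happen with recurrent loops
```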
Things to try:
It seems like some of the evolved structures should be able to solve XOR, so check these things.
Steepening the activation function made it solve XOR for the first time! But it still takes a lot longer than expected and seems to fail quite often. The solution seems to take over the population quickly.
This network (which is the same minimal structure shown above!) solved XOR after 7288 evaluations (50 epochs), using `1 / (1 + exp(-4*self.sum))`.
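The steepened activation as a standalone function, for clarity; the slope of 4 is the value from the expression above, and for comparison the original NEAT implementation uses 4.9:

```python
from math import exp

def steepened_sigmoid(x, slope=4.0):
    """Steeper sigmoid: 1 / (1 + exp(-slope * x))."""
    return 1.0 / (1.0 + exp(-slope * x))
```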
XOR is now solved pretty consistently within 10000 evaluations. The main issues
The networks never solve XOR... The (1,1,0) example is never solved for some reason. It seems like all of the species converge on different networks which can classify every example correctly except that one.
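For debugging this, a sketch of how the four XOR cases (including the failing (1,1)→0 case) could be checked is shown below; it reuses the hypothetical `activate_until_stable` sketch from earlier, and the 0.5 decision threshold is an assumption, not something from the repository:

```python
XOR_CASES = [
    ((0.0, 0.0), 0.0),
    ((0.0, 1.0), 1.0),
    ((1.0, 0.0), 1.0),
    ((1.0, 1.0), 0.0),   # the case reported as never being solved
]

def solves_xor(network, threshold=0.5):
    """A network 'solves' XOR here if every case lands on the correct
    side of the threshold (threshold choice is an assumption)."""
    for inputs, expected in XOR_CASES:
        output = activate_until_stable(network, inputs)
        if output is None:
            return False
        classified = 1.0 if output[0] > threshold else 0.0
        if classified != expected:
            return False
    return True
```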
The original NEAT implementation solves XOR in 32 generations on average. Detailed stats from the NEAT paper:
"On 100 runs, the first experiment showed that the NEAT system finds a structure for XOR in an average of 32 generations (4,755 networks evaluated, std=2,553). On average a solution network had 2.35 hidden nodes and 7.48 non-disabled connection genes. Since a successful network requires 43at least one hidden unit (figure 4.1b) , NEAT actually found very small networks. NEAT was also very consistent in finding a solution: It did not fail once in 100 simulations. The worst performance took 13,459 evaluations, or about 90 generations (compared to 32 generations on average). The standard deviation for number of nodes used in a solution was 1.11, meaning NEAT consistently used 1 or 2 hidden nodes to build an XOR network."
The actual parameters used are stated in the paper.