Ploppz opened 5 years ago
thanks @Ploppz, I noticed this behavior, and I agree with your solution. We should be able to send a predefined net to the algorithm. For me, NEAT is only the base for the project; if we find better ways to do things, we should add them to the project.
Edit: I am talking about ways to boost the algorithm in general. We could also make it possible to start with a predefined genome of course. Edit 2: I rewrote this after reading the paper.
The paper splits neurons into input, hidden and output neurons.
I think we should in any case start with n_inputs + n_outputs neurons.
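To make that concrete, here is a minimal sketch of such a starting genome. All names (`NeuronKind`, `NeuronGene`, `Genome`, `minimal_genome`) are illustrative, not the library's actual types:

```rust
// Illustrative minimal genome: n_inputs + n_outputs neurons, no connections.
#[derive(Debug, Clone, Copy, PartialEq)]
enum NeuronKind { Input, Hidden, Output }

#[derive(Debug, Clone)]
struct NeuronGene { id: usize, kind: NeuronKind }

#[derive(Debug)]
struct Genome {
    neurons: Vec<NeuronGene>,
    // (from, to, weight)
    connections: Vec<(usize, usize, f64)>,
}

fn minimal_genome(n_inputs: usize, n_outputs: usize) -> Genome {
    let mut neurons = Vec::with_capacity(n_inputs + n_outputs);
    for id in 0..n_inputs {
        neurons.push(NeuronGene { id, kind: NeuronKind::Input });
    }
    for id in n_inputs..n_inputs + n_outputs {
        neurons.push(NeuronGene { id, kind: NeuronKind::Output });
    }
    // Start with no connections; mutation adds them later.
    Genome { neurons, connections: Vec::new() }
}
```

Hidden neurons would then only appear later through mutation, which is the "start minimal" idea from the paper.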
Then some questions are:
whether we should discriminate between input, hidden and output neurons. Specifically, I'm thinking that maybe no connections should be added among input neurons, or among output neurons. But maybe I'm wrong; I'm not sure how the Ctrnn works, and maybe such connections do have some effect.
Whether we should start with connections between input and output. In the paper it seems they do that. But it also seems fine not to, because the paper also talks about starting as minimally as possible (and in that case, it would probably be beneficial not to allow connections within the input and output neuron groups).
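The first restriction could be implemented as a guard that the add-connection mutation consults. A sketch, assuming illustrative names (`NeuronKind`, `connection_allowed` are not existing library items):

```rust
#[derive(Clone, Copy, PartialEq)]
enum NeuronKind { Input, Hidden, Output }

/// Forbid input->input and output->output connections; allow everything
/// else (input->output, and anything involving a hidden neuron).
fn connection_allowed(from: NeuronKind, to: NeuronKind) -> bool {
    !((from == NeuronKind::Input && to == NeuronKind::Input)
        || (from == NeuronKind::Output && to == NeuronKind::Output))
}
```

The add-connection mutation would sample candidate (from, to) pairs and reject those where `connection_allowed` returns false.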
I'm at exactly the same point. When I started the project I was thinking that the algorithm should be as standard as possible; I mean that the user should not configure the net, only call the algorithm with inputs and outputs and get results. I think that connecting inputs with outputs by default will improve performance; as you say, this implementation takes a lot of generations to make any improvement.
I think the Ctrnn discriminates between input and hidden neurons. This section is the most obscure one for me; I did some tests changing it, and I'm not sure it's well implemented. In the readme of the function_aproximation branch there is a link to the Ctrnn paper.
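For reference, the standard CTRNN dynamics are `tau_i * dy_i/dt = -y_i + sum_j w_ji * sigmoid(y_j + theta_j) + I_i`, where the external input `I_i` is typically non-zero only for input neurons; that is the part that distinguishes inputs from hidden neurons. A generic Euler-integration sketch (not the branch's actual implementation; all names are illustrative):

```rust
fn sigmoid(x: f64) -> f64 { 1.0 / (1.0 + (-x).exp()) }

/// One Euler step of the CTRNN state `y`.
/// `w[j][i]` is the weight from neuron j to neuron i,
/// `theta` the biases, `tau` the time constants, and
/// `input[i]` the external input (non-zero only for input neurons).
fn ctrnn_step(y: &mut [f64], w: &[Vec<f64>], theta: &[f64],
              tau: &[f64], input: &[f64], dt: f64) {
    let old: Vec<f64> = y.to_vec();
    for i in 0..y.len() {
        let mut net = 0.0;
        for j in 0..old.len() {
            net += w[j][i] * sigmoid(old[j] + theta[j]);
        }
        // dy_i/dt = (-y_i + net_i + I_i) / tau_i
        y[i] += dt * (-old[i] + net + input[i]) / tau[i];
    }
}
```

Under this formulation, input->input or output->output weights are just more entries in `w`, so they do have an effect on the dynamics; whether they help evolution is a separate question.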
Both in the XOR example and in my own attempts, I noticed something: for the first 100-200 generations, the output of `organism.activate` is `0.0`. So we are essentially waiting for any connection between input and output, while no evolution other than random mutation can happen, because fitness will be constant for the organisms that only output `0.0`.

So I suggest either starting from a slightly more connected starting point, or finding a way to make the algorithm more 'eager' to add connections early on (though maybe this is not in line with the original algorithm), or letting the user specify a starting point (that is, a genome or NN architecture to start with). Maybe it would already be a good improvement to connect the one start neuron with all inputs and outputs.
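The "slightly more connected starting point" could be as simple as seeding every input->output connection at initialization. A sketch, assuming output neuron ids follow the input ids, with a fixed weight where a real implementation would draw a random one (all names illustrative):

```rust
/// Seed one connection from every input neuron to every output neuron.
/// Returns (from, to, weight) tuples; output ids start at n_inputs.
fn initial_connections(n_inputs: usize, n_outputs: usize)
    -> Vec<(usize, usize, f64)>
{
    let mut conns = Vec::with_capacity(n_inputs * n_outputs);
    for i in 0..n_inputs {
        for o in 0..n_outputs {
            // Fixed weight 1.0 for the sketch; randomize in practice.
            conns.push((i, n_inputs + o, 1.0));
        }
    }
    conns
}
```

With this seed, `organism.activate` produces non-constant output from generation 0, so fitness can discriminate between organisms immediately instead of after 100-200 generations.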