Closed mfcochauxlaberge closed 1 year ago
Hi @mfcochauxlaberge, yes, you have a good understanding of the existing code. There are many different ways to convert the signal levels into actions, and you could devise other, possibly better ways. To answer your questions:
Each internal neuron gets updated once per simulator cycle and its output value is latched and persists until the next simulator cycle. A neuron that feeds itself uses the latched value from the previous cycle as its input, then it computes and latches a new output value.
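That latching behavior can be sketched like this (an illustrative sketch, not biosim4's actual code; the struct and function names are made up): all new outputs are computed from the previous cycle's latched values, and only then are they committed, so a self-connection always reads the value from the previous cycle.

```cpp
#include <cmath>
#include <vector>

// Illustrative sketch: each neuron's output is latched for one simulator
// cycle. A neuron that feeds itself reads the value that was latched at
// the end of the previous cycle.
struct Neuron {
    float output = 0.5f;  // latched value from the previous cycle
};

struct Connection {
    int from, to;   // neuron indices
    float weight;
};

void simulateOneCycle(std::vector<Neuron>& neurons,
                      const std::vector<Connection>& conns) {
    // Accumulate input sums using only the latched (previous-cycle) outputs.
    std::vector<float> sums(neurons.size(), 0.0f);
    for (const auto& c : conns) {
        sums[c.to] += c.weight * neurons[c.from].output;
    }
    // Commit: latch all new outputs at once for the next cycle.
    for (size_t i = 0; i < neurons.size(); ++i) {
        neurons[i].output = std::tanh(sums[i]);
    }
}
```

With a single self-feeding neuron (weight 1.0, initial output 0.5), the first cycle latches `tanh(0.5)`, and the second cycle latches `tanh(tanh(0.5))`, each cycle reading the previously latched value.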
The simulator version in the repository interprets negative probabilities as zero probability.
I think it might have worked a little differently than what I said in the video. With no external signal driving it, the internal neuron may have become saturated and supplied a constant mid-level drive to both the MvE and Mrn neurons. When the LPf sensor neuron detected an unobstructed path, it increased the level of the drive to the MvE neuron and caused it to be the dominant output action. When the LPf neuron detected an obstacle, it decreased the drive to the MvE neuron to a level below the constant drive to the Mrn neuron.
Good luck with your own experiments. Let us know how it goes.
@davidrmiller Thanks a lot for the quick response!
I'll leave this issue open for now since I still have a small question about number 3, but I'll try to find the answer in the code. If I can't, I'll ask here.
I think I get it now.
I was still wondering where that signal could come from if no input is attached to the internal neuron.
But it seems there is a concept of being "driven": a neuron can be marked as non-driven, in which case it is given a constant output that never changes. Until now, my understanding of "driven" was simply "is receiving a signal".
What I understand from the code is that a neuron not attached to any sensor or other neuron is given a constant output of 0.5, which is what is happening here.
From `genome-neurons.h`:

```cpp
// When a new population is generated and every individual is given a
// neural net, the neuron outputs must be initialized to something:
constexpr float initialNeuronOutput() { return 0.5; }
```
And from `genome.cpp`, which is where the `driven` property is set:

```cpp
nnet.neurons.back().driven = (nodeMap[neuronNum].numInputsFromSensorsOrOtherNeurons != 0);
```
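So, during the update step, only driven neurons get recomputed. A minimal sketch of how that flag can play out (illustrative names, not the repository's exact code):

```cpp
#include <cmath>
#include <vector>

// Illustrative sketch: a neuron marked non-driven (no inputs from sensors
// or other neurons) is never recomputed, so it keeps its initial output
// of 0.5 for the whole lifetime of the individual.
struct Neuron {
    float output = 0.5f;  // initialNeuronOutput()
    bool driven = false;  // true if it has at least one input connection
};

void updateNeurons(std::vector<Neuron>& neurons,
                   const std::vector<float>& inputSums) {
    for (size_t i = 0; i < neurons.size(); ++i) {
        if (neurons[i].driven) {
            neurons[i].output = std::tanh(inputSums[i]);
        }
        // Non-driven neurons are left at their constant initial output.
    }
}
```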
Then I would simply ask: why force an output onto neurons that cannot receive a signal? If a signal is useful, wouldn't evolution simply make the necessary connections?
You got it. At birth, every neuron gets an initial output value by calling initialNeuronOutput(). If a neuron has no inputs, its output value will never change during its lifetime.
The initial neuron output value is defined in a function to make it easier to experiment with different initial values.
In general, artificial neurons are more flexible and trainable if each one has a constant, weighted bias signal that it can sum with its other input signals. By giving all newly born neurons a nonzero initial output, those with no inputs automatically act as constant bias sources for other neurons, with the expectation that the evolutionary process will adjust the connection weights and keep the connections that are useful.
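To make the bias idea concrete, here is a small sketch (the function name and all weight values are hypothetical): an input-less neuron's constant 0.5 output, multiplied by an evolvable connection weight, contributes a fixed term to a downstream neuron's input sum, exactly like a trainable bias in a conventional artificial neuron.

```cpp
#include <cmath>

// Illustrative sketch: an input-less neuron outputs a constant 0.5, so a
// weighted connection from it adds a constant, evolvable bias term
// (biasWeight * 0.5) to a downstream neuron's input sum.
float downstreamOutput(float sensorSignal, float sensorWeight,
                       float biasWeight) {
    const float biasOutput = 0.5f;  // undriven neuron, never recomputed
    float sum = sensorWeight * sensorSignal + biasWeight * biasOutput;
    return std::tanh(sum);
}
```

With hypothetical values (sensor signal 0.8, sensor weight 2.0, bias weight -1.2), the bias term shifts the input sum from 1.6 down to 1.0 before the activation function is applied, shifting where the neuron saturates.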
Interesting. I'll keep experimenting. Thanks!
I watched the video on YouTube and it was great. I watched it a few times and I'm trying to re-implement the simulator myself.
I had a question about how signals travel in the neural network. My understanding goes like this:
I currently have the following questions:
(image for question 3)
Thank you!
(I'm still reading the code, so if I find an answer I'll post it here. I'm not comfortable with C++ unfortunately.)