davidrmiller / biosim4

Biological evolution simulator

Evaluating loops in the network #56

Open dpellegr opened 2 years ago

dpellegr commented 2 years ago

Hi,

thank you for the inspiring video and the nicely written source code, which I am studying thoroughly.

I am having issues understanding how to properly evaluate a network that has no layers and therefore allows circular loops. The simplest example is in this frame from your video:

[Screenshot from 2022-01-11 18-38-01: a two-neuron network with N0 and N1 connected in a loop]

where, to update the value of N1, I would need the value of N0, and to update N0 I would need N1. These loops may involve many neurons and can be very hard to spot and disentangle.

The only simple way out that I see is to initialize all the neurons to zero and update them using their values from the previous time step. But I am worried that this will make the network very "slow", in the sense that propagating information from sensors to actuators may take a large number of steps. Is this how it actually works?

Many thanks for your clarifications!

davidrmiller commented 2 years ago

Hi @dpellegr , you're thinking in the right direction -- each neuron's output gets updated exactly once during each simulator step, and its output is then latched and persists until the next sim step. A neuron's inputs are taken from the latched outputs of the other neurons. That means that, depending on the order in which they get evaluated, an input value to a neuron might be another neuron's output value latched earlier in the same sim step, or latched during the previous sim step. I imagine that feedback loops could act as state machines or oscillate over a number of sim steps. Also see related discussions in #47 and #18.
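To illustrate the latching behavior described above, here is a minimal sketch (this is not biosim4's actual code; the `Neuron` struct with explicit input indices and weights is a hypothetical simplification, and the tanh squashing is an assumption). Because outputs are overwritten in place, a neuron evaluated later in the step sees the already-updated outputs of earlier neurons, and last step's outputs of the rest:

```cpp
// Single-buffer "latched" evaluation: outputs are updated in place,
// so the result depends on the evaluation order.
#include <cmath>
#include <vector>

struct Neuron {
    double output = 0.0;            // latched output, persists across sim steps
    std::vector<int> inputs;        // indices of source neurons (hypothetical layout)
    std::vector<double> weights;    // one weight per input
};

void simStep(std::vector<Neuron>& net) {
    // Neurons are updated in index order. Neuron i reads outputs
    // already updated this step (for j < i) or last step's (for j >= i).
    for (auto& n : net) {
        double sum = 0.0;
        for (std::size_t k = 0; k < n.inputs.size(); ++k)
            sum += n.weights[k] * net[n.inputs[k]].output;
        n.output = std::tanh(sum);  // squashing activation (assumed here)
    }
}
```

In the N0/N1 loop from the screenshot, with evaluation order N0 then N1: N0 reads N1's previous-step value, but N1 then reads N0's freshly latched value from the same step.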

dpellegr commented 2 years ago

Thank you for your answer!

This point is still a bit perplexing for me:

> depending on the order in which they get evaluated, an input value to a neuron might be another neuron's output value latched earlier in the same sim step or latched during the previous sim step.

I would have approached the stepping process by creating a const copy of the neurons (holding the previous outputs) and updating from it, so that the time steps remain fully separated. The fact that the current time step is mixed with the previous one, albeit in a consistent way, sounds a bit strange to me. But maybe everything is just absorbed back by the evolution process and does not make any difference in the end, so one can just pick the fastest implementation.
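The "const copy" alternative can be sketched like this (again a hypothetical simplification, not biosim4's code, reusing the same assumed `Neuron` layout and tanh activation as above). Snapshotting last step's outputs makes the result independent of evaluation order, at the cost of one extra copy per step:

```cpp
// Double-buffered evaluation: all inputs are read from an immutable
// snapshot of the previous step, so evaluation order no longer matters.
#include <cmath>
#include <vector>

struct Neuron {
    double output = 0.0;            // output as of the last completed step
    std::vector<int> inputs;        // indices of source neurons (hypothetical layout)
    std::vector<double> weights;    // one weight per input
};

void simStepBuffered(std::vector<Neuron>& net) {
    // Snapshot the previous step's outputs before updating anything.
    std::vector<double> prev(net.size());
    for (std::size_t i = 0; i < net.size(); ++i)
        prev[i] = net[i].output;

    // Every neuron sees only step t-1 values, never a mix of t and t-1.
    for (auto& n : net) {
        double sum = 0.0;
        for (std::size_t k = 0; k < n.inputs.size(); ++k)
            sum += n.weights[k] * prev[n.inputs[k]];
        n.output = std::tanh(sum);  // squashing activation (assumed here)
    }
}
```

With this scheme the N0/N1 loop simply swaps (squashed) values every step, behaving as a two-step oscillator regardless of the order in which the neurons are visited.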

davidrmiller commented 2 years ago

> But maybe everything is just absorbed back by the evolution process and does not make any difference in the end, so one can just pick the fastest implementation.

You expressed that better than I could have!

JohnMasen commented 1 month ago

I would call this a "frame": for a self-link, the input comes from the last frame. This creates a "short-term memory system". If we add an action that dumps the values of a certain chain to persistent storage, and a source that restores values from it, are we creating a long-term memory system?