PhilippPlank opened this issue 1 year ago
Thanks for adding this item! I would include in the user story that if all the delays off a given neuron are different, the compiler will create an appropriate combination of axonal and synaptic delays to implement the mix efficiently. The most important point is that any mixture of various delays will get optimized "behind the scenes".
Formulated this way, I'm afraid a solution to this is a long way off, since it would require the compiler to modify the actual configuration of the processes while compiling them. Two points are notable here:

1. It's conceivable to enhance the compiler to do that, but I'm afraid this wouldn't be an immediate priority, because the problem can probably be solved much more easily independent of the compiler.
2. All that's needed seems to be a standalone utility that, given a delay matrix, factors it into a common axonal delay vector and a pure synaptic delay matrix. The user can simply call this utility and plug the axonal delays into the NeuronProc and the pure SynDelays into the ConnProc.
This solution probably gets 95% of the problem done at a fraction of the effort.
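For concreteness, here is a minimal sketch of such a utility in NumPy. The function name and the matrix orientation (`delay_matrix[target, source]`) are my assumptions for illustration, not an existing Lava API:

```python
import numpy as np


def factor_delays(delay_matrix: np.ndarray):
    """Split a per-synapse delay matrix into a per-source axonal delay
    vector plus a residual synaptic delay matrix.

    Assumes delay_matrix[i, j] holds the delay from source neuron j to
    target neuron i, so each column collects the delays off one neuron.
    """
    # The largest delay shared by all synapses leaving a source neuron
    # is the column-wise minimum; that part can be moved onto the axon.
    axonal_delays = delay_matrix.min(axis=0)
    # Whatever remains must stay per-synapse.
    synaptic_delays = delay_matrix - axonal_delays
    return axonal_delays, synaptic_delays
```

The axonal vector would then go into the NeuronProc and the residual matrix into the ConnProc, as described above.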
That factoring method sounds reasonable. I understand this is more of a future wish-list item, but I wanted to get it on the table because it seems like a natural thing for a neural software stack to do. I don't see any issue/drawback with a compiler rewriting a configuration, since code rearrangement is basically what every compiler middle-end does. The important thing is that it be functionally equivalent to the user's original spec.
The factoring process is the interesting part of this problem. The approach you describe would give the first stage of delay (axonal). If we were making a purely axonal version, we could treat each neuron output as a distribution tree and factor all the common delay stages. Without creating a special new neuron model, we can simply insert a regular neuron for each stage of delay.
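A minimal sketch of that purely axonal construction, assuming each relay neuron adds exactly one timestep of delay; the `make_relay` and `connect` helpers are hypothetical placeholders, not Lava API:

```python
def build_delay_trunk(source, targets_by_delay, make_relay, connect):
    """Implement per-target delays with plain relay neurons.

    targets_by_delay maps a delay in timesteps to the list of targets
    that need it. A single shared chain of relays acts as the trunk of
    the distribution tree; each target taps the chain at its own depth,
    so common delay stages are factored out automatically.
    """
    tail = source
    for depth in range(1, max(targets_by_delay) + 1):
        relay = make_relay()        # assumed: one extra timestep per stage
        connect(tail, relay)        # extend the shared trunk
        for target in targets_by_delay.get(depth, []):
            connect(relay, target)  # tap the trunk at this depth
        tail = relay
```

The cost is one relay neuron per delay stage rather than one per synapse, which is exactly the saving the distribution-tree view buys.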
User story
As a user, I want to be able to specify synaptic delays without thinking about the depth of delays. The compiler finds the most efficient way to implement long delays and handles mixed delays off a single neuron. This means the compiler would figure out whether an axonal delay is possible/more efficient than synaptic delays, e.g., if all synapses leaving a given neuron have the same delay, the compiler would configure a single axonal delay instead.
Originally posted by @frothga in https://github.com/lava-nc/lava/issues/237#issuecomment-1421525374
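To illustrate the special case in the user story with the hypothetical `factor_delays` utility sketched earlier (values are invented): when every synapse leaving a neuron carries the same delay, factoring leaves no synaptic residue for that neuron, i.e., the delay becomes purely axonal.

```python
import numpy as np

# Every synapse leaving source neuron 0 has delay 4, so factoring moves
# all of it onto the axon; source neuron 1 keeps a synaptic residue.
delays = np.array([[4, 2],
                   [4, 6]])
axonal, synaptic = factor_delays(delays)
assert axonal.tolist() == [4, 2]
assert synaptic.tolist() == [[0, 0], [0, 4]]
```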
Conditions of satisfaction
Acceptance tests