Based on Numenta's idea that minibursts from predicted cells cause extended depolarisation via metabotropic receptors.
No more temporal pooling excitation. Instead, synapses from predicted
cells excite their target cells over multiple time steps.
No more union pooling; activation level is constant.
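A minimal Python sketch of this excitation scheme, assuming a fixed persistence of a few time steps (the actual duration is not specified in these notes, and the class and names here are hypothetical):

```python
from collections import defaultdict

PERSISTENCE_STEPS = 3  # assumed duration; not specified in the notes


class ExtendedExcitation:
    """Targets of synapses from predicted cells stay excited for a
    fixed number of steps at a constant level (no union pooling,
    no decaying activation)."""

    def __init__(self, persistence=PERSISTENCE_STEPS):
        self.persistence = persistence
        self.remaining = defaultdict(int)  # cell id -> steps of excitation left

    def on_predicted_cells_fire(self, target_cells):
        # Synapses from predicted cells excite their targets; the
        # excitation is refreshed to the full persistence window.
        for cell in target_cells:
            self.remaining[cell] = self.persistence

    def step(self):
        # Constant activation level while the excitation lasts.
        active = {c for c, n in self.remaining.items() if n > 0}
        for c in list(self.remaining):
            self.remaining[c] -= 1
            if self.remaining[c] <= 0:
                del self.remaining[c]
        return active
```

For example, with persistence 2, a cell excited once remains active for exactly two subsequent steps and then drops out.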
First level layers run exactly the same algorithm as higher layers:
Proximal synapses grow to any active cells, not just stable ones.
Winner cells remain the same in continuing active columns unless reset;
we may rely on an external timing signal to distinguish repeats.
Learn on winner cells only when they become active (even at first level).
But might revisit this to learn auto-associatively for pattern completion.
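The winner-cell rule above might be sketched as follows; `pick_new` is a hypothetical helper standing in for whatever selection rule chooses a winner in a newly active column, which these notes do not specify:

```python
def choose_winner_cells(active_columns, prev_winners, reset, pick_new):
    """Winner cells persist in columns that stay active, unless reset.

    active_columns: set of currently active column ids
    prev_winners:   dict of column id -> winner cell from the last step
    reset:          external reset signal clearing all winners
    pick_new:       hypothetical helper choosing a winner for a
                    newly active column
    """
    if reset:
        prev_winners = {}
    winners = {}
    for col in active_columns:
        if col in prev_winners:
            # Continuing active column: keep the same winner cell.
            winners[col] = prev_winners[col]
        else:
            # Newly active column: choose a fresh winner.
            winners[col] = pick_new(col)
    return winners
```

Learning would then be applied to a winner only on the step it becomes active, which is why an external timing signal may be needed to distinguish repeats of the same input.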
Informal testing on the second-level-motor demo gives plausible results: the activated higher-layer columns include some of those from previously predicted states. Obviously this needs a lot more testing and development.