Closed — floybix closed this 8 years ago
I like how the meat of every function (its `let` vector) fits on the screen all at once. It's way easier to hold in my head when I can look at it without scrolling.
(Still reading)
Ha, you're right. Deleted now.
The whole TP (temporal pooling) mechanism is still up in the air, AFAIK. Currently the stable bits input (from predicted source cells) is used only for a single threshold comparison, to decide whether a layer is "engaged".
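That threshold comparison could be pictured like this — a hedged sketch only; `stable-active-bits` and `engaged-threshold` are illustrative names, not the actual comportex API:

```clojure
;; Hypothetical sketch: a layer counts how many of its active input
;; bits came from predicted (stable) source cells, and compares that
;; count against a threshold to decide engagement.
(defn engaged?
  "True when enough of the layer's input comes from predicted sources."
  [stable-active-bits engaged-threshold]
  (>= (count stable-active-bits) engaged-threshold))
```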
Looks good. Really nice cleanup. :+1:
Separated cell activation from winner-cell selection, and cell activation from temporal pooling logic.
A new (obvious?) axiom for the design: selecting winner cells should happen in the learn phase.
Think of the learn phase as being on a slower time scale than the activate phase; only in the learn phase do sub-connected (weakly-connected?) synapses have any effect.
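One way to picture the two permanence thresholds implied here — a sketch under assumed names (`connected-pm` and `learnable-pm` are hypothetical, not the actual comportex parameters):

```clojure
;; Hypothetical: synapses as a map of source-id -> permanence.
;; Activate phase: only fully connected synapses contribute.
(defn connected-syns
  [syns connected-pm]
  (into {} (filter (fn [[_ pm]] (>= pm connected-pm)) syns)))

;; Learn phase: winner-cell / segment selection may also consider
;; sub-connected synapses above a lower "learnable" threshold.
(defn learnable-syns
  [syns learnable-pm]
  (into {} (filter (fn [[_ pm]] (>= pm learnable-pm)) syns)))
```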
Now the selection of winner cells and segments is centralised into one place.
Pulled out the learning-related state into a new :learn-state record.
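A hypothetical shape for that record — the field names below are guesses for illustration, not the actual comportex fields:

```clojure
;; Hypothetical: learning-related state pulled out of the main layer
;; state into its own record. Field names are illustrative only.
(defrecord LayerLearnState
  [winner-cells       ; cells selected to learn this step
   learning-segments  ; segments chosen for reinforcement
   prior-winners])    ; winner cells from the previous step
```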
Started a CHANGELOG.md