Oblynx / HierarchicalTemporalMemory.jl

A simple, high-level Julia implementation of Numenta HTM algorithms
https://oblynx.github.io/HierarchicalTemporalMemory.jl
MIT License

Add learn switch to step! #83

Closed · Oblynx closed this 2 years ago

Oblynx commented 2 years ago

There is a valid use case for stimulating the temporal memory without adapting its synapses: this updates its predictive/winning neurons, which changes the next output, while leaving the learned connections intact.
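A minimal sketch of the pattern being proposed, using a toy stand-in model (the struct, fields, and update rules below are illustrative assumptions, not the package's actual implementation): a `learn` keyword on `step!` that always updates the activation state but gates synapse adaptation.

```julia
# Toy stand-in for a temporal memory; NOT the package's real data structures.
mutable struct ToyTM
    weights::Vector{Float64}   # stand-in for recurrent synapse permanences
    state::Vector{Float64}     # stand-in for predictive-neuron state
end

function step!(tm::ToyTM, x::AbstractVector{<:Real}; learn::Bool=true)
    tm.state = tm.weights .* x           # stimulation always updates the state
    if learn
        tm.weights .+= 0.1 .* x          # synapse adaptation only when learning
    end
    return tm.state
end

tm = ToyTM(zeros(3), zeros(3))
step!(tm, [1.0, 0.0, 1.0])               # adapts weights and updates state
step!(tm, [1.0, 0.0, 1.0]; learn=false)  # updates state only; weights untouched
```

Defaulting `learn=true` keeps the existing call sites unchanged, so the switch is purely opt-in.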

Example:

A= Region(SPParams(), TMParams())
x= bitrand(500)
x2= bitrand(500)  # another input

# Unexpected input, no predictive neurons, massive bursting
unexpected_x= A(x)

# learn the sequence by presenting x repeatedly
[step!(A,x) for t=1:10]
# Expected input: thanks to the previous stimulation, the learned recurrent synapses are
# likely to predict the next stimulus, resulting in much less excitation
expected_x= A(x)

# therefore, this will be true:
count(expected_x) < count(unexpected_x)

# an unexpected input will break the series
step!(A,x2)
# and stimulating again with x will be unexpected, causing bursting
unexpected_x_afterlearn= A(x)

# however, the connections that allow the region to expect x are still there,
# so stimulating with x once again should produce a very similar output to before
step!(A,x)
expected_x_2= A(x)
expected_x_2 ≈ expected_x  # approximately same

# Proposed: step!(A,x, learn=false)
# This yields an output very similar to `expected_x_2`, but without adapting the synapses further:
step!(A,x, learn=false)
expected_x_3= A(x)
expected_x_3 ≈ expected_x_2  # approximately same

The advantage of step!(A,x, learn=false) is that, because the synapses aren't adapted, it allows less "disruptive" probing of a learned sequence: only the set of predictive neurons changes, while the learned connections stay as they are.
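One way this probing could be used, sketched against the Region/step! API from the example above (the helper function and scoring rule are hypothetical, not part of the package):

    # Hypothetical probing pattern built on the proposed learn=false switch:
    # test which candidate continuation the region predicts best, without
    # letting the probes themselves rewrite the learned synapses.
    function least_surprising(A, candidates)
        scores = map(candidates) do c
            step!(A, c, learn=false)   # stimulate without adapting synapses
            count(A(c))                # fewer active neurons ⇒ better predicted
        end
        candidates[argmin(scores)]
    end

Because every probe runs with learn=false, the region's learned sequence model is the same after the loop as before it; only the predictive-neuron state has moved.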