Open · breznak opened this issue 5 years ago
CC @ctrl-z-9000-times
A quick update: `createSynapse`, `destroySynapse` and `createSegment`, `destroySegment` exist in Connections. For me this is a low priority because RAM is historically inexpensive; I think that the CPU is more often the bottleneck.
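For orientation, a minimal sketch of how those calls might be driven from the Python bindings; the import path, the constructor arguments and the `numSegments()` / `numSynapses()` helpers are assumptions about your htm.core build, and exact signatures have changed between versions:

```python
from htm.bindings.algorithms import Connections  # assumed import path

# Assumed constructor: number of cells and the connected-permanence threshold.
conn = Connections(1024, 0.5)

seg = conn.createSegment(42)            # grow a distal segment on cell 42
syn = conn.createSynapse(seg, 7, 0.21)  # presynaptic cell 7, permanence 0.21

# Nothing in SP/TM calls these destroys automatically, so without explicit
# destroys (or pruning) the memory footprint only ever grows.
conn.destroySynapse(syn)
conn.destroySegment(seg)

print(conn.numSegments(), conn.numSynapses())
```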
In that thread HDD/model size was a practical problem. Also, we never had a model running continuously for, say, one year.
I'm also not interested in the practical limitations, but ...
> Spatial pooler calls only `createSegment`, never releases it. See if that's correct.
Yes, this is correct. At initialization the Spatial Pooler creates all of its synapses and segments. At run time it neither creates nor destroys any segments/synapses, so it has a fixed memory usage.
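One way to sanity-check that is to watch the process RSS while the SP runs; a rough sketch, assuming the htm.core Python bindings plus psutil, with arbitrary parameter values:

```python
import psutil
from htm.bindings.sdr import SDR
from htm.bindings.algorithms import SpatialPooler

proc = psutil.Process()
rss_mb = lambda: proc.memory_info().rss / 2**20

sp = SpatialPooler(inputDimensions=[1000], columnDimensions=[2048])
print("after init:", round(rss_mb()), "MB")   # all segments/synapses already exist

inp, cols = SDR([1000]), SDR([2048])
for _ in range(10_000):
    inp.randomize(0.02)          # 2% sparse random input
    sp.compute(inp, True, cols)  # learning enabled

print("after 10k steps:", round(rss_mb()), "MB")  # should stay ~flat: SP never grows
```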
Part of this could be addressed by #466 (SP using synapse pruning & synapse competition), PR #584.
Hi everyone,
I'm running a model over 13 trillion data samples to simulate a streaming analysis. After about 3% of the data, the model had used up my whole memory and crashed at `tm.compute(...)` with a `MemoryError: bad allocation`.
I've looked in the HTM Forum and I came across this issue, but it seems like it's still an open task.
Have you implemented the synapse drop out in some way? Do you have any experience with HTM streaming analysis?
Thanks in advance.
@N3rv0us thanks for reporting this.
I think this is one that @breznak or @ctrl-z-9000-times would have to address.
Hi @N3rv0us ,
> running a model over 13 trillion data samples
Very interesting data size! Yes, this is a known issue; we've been running "online learning" HTMs, but I never approached these sizes.
> Have you implemented the synapse drop out in some way?
There's a parameter for synaptic decay, or similar, but it never frees the actual space in memory.
There are recently merged / in-progress PRs on synapse pruning that should let you prune the memory. I'd say the default is OFF.
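For reference, the decay I mean is the permanence decrement on the TM; a sketch assuming the standard NuPIC-style parameter names used by htm.core's TemporalMemory. These only weaken synapses; without the pruning PRs the objects stay allocated:

```python
from htm.bindings.algorithms import TemporalMemory

tm = TemporalMemory(
    columnDimensions=(2048,),
    cellsPerColumn=16,
    permanenceIncrement=0.10,
    permanenceDecrement=0.10,        # "synaptic decay" on active segments
    predictedSegmentDecrement=0.004, # punishes segments that predicted wrongly
)
# Permanences can decay toward 0, but the Segment/Synapse objects themselves
# remain in memory unless pruning (see PR #601) is enabled.
```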
> used up my whole memory and crashes
Even with pruning off, the model should use up all of its available synapses/memory and then start reusing it; the crash is unexpected.
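To make "all of its available synapses" concrete: the hard ceiling is roughly columns × cellsPerColumn × maxSegmentsPerCell × maxSynapsesPerSegment. With illustrative (not default) numbers:

```python
# Illustrative values only; plug in your own TM parameters.
columns               = 2048
cellsPerColumn        = 32
maxSegmentsPerCell    = 128
maxSynapsesPerSegment = 128

max_synapses = columns * cellsPerColumn * maxSegmentsPerCell * maxSynapsesPerSegment
print(f"{max_synapses:,}")  # 1,073,741,824 synapses; at tens of bytes each, tens of GB
```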
> and crashes at tm.compute(...)
Ideally, if you can, get a stack trace of the crash from a Debug build, but I guess that's practically hard to obtain.
Can you try emulating the problem by setting low limits on the number of synapses, segments, ...? This will cause your model to use up its resources much sooner. If it crashes, the problem is in our code and is easier to replicate early.
If it does not crash, your model may simply be too large for your HW (RAM) to satisfy, and you'd need to lower the settings for your model.
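A sketch of such a stress test, assuming the htm.core Python bindings and psutil; the deliberately tiny `maxSegmentsPerCell` / `maxSynapsesPerSegment` make the model hit its ceiling within minutes, so you can see whether memory plateaus (expected) or keeps growing until the crash (a library bug):

```python
import psutil
from htm.bindings.sdr import SDR
from htm.bindings.algorithms import TemporalMemory

proc = psutil.Process()

tm = TemporalMemory(
    columnDimensions=(512,),
    cellsPerColumn=8,
    maxSegmentsPerCell=8,      # tiny limits: saturate in minutes, not weeks
    maxSynapsesPerSegment=16,
)

active = SDR((512,))
for step in range(200_000):
    active.randomize(0.02)            # stand-in for your SP output / encoder
    tm.compute(active, learn=True)
    if step % 10_000 == 0:
        print(step, proc.memory_info().rss // 2**20, "MB")
# RSS should level off once all segments/synapses are allocated;
# unbounded growth here points at a leak in our code.
```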
I'll be happy to look at this later next week.
Segment and synapse pruning is implemented in #601 for Connections, and is therefore available to any of our algorithms (SP, TM). It is not enabled by default, because synapse pruning in the SP causes our strict deterministic runs to fail.
If you can live with that, please try enabling both synapse & segment pruning. As a follow-up, we should fix the determinism with synapse pruning and turn it on by default.
I ran into an interesting read from Subutai:
I think the issue has been discussed here.
To experiment with this: `createSynapse()`s ...