Christian-B opened this issue 5 years ago
If you increase n_neurons, for example to 255, you get:

[ERROR] (post_events.h: 47): Unable to allocate global STDP structures - Out of DTCM: Try reducing the number of neurons per core to fix this problem
I noticed that in https://github.com/SpiNNakerManchester/sPyNNaker/blob/master/neural_modelling/src/neuron/plasticity/stdp/post_events.h the history is allocated with:

post_event_history_t *post_event_history = (post_event_history_t *) spin1_malloc(n_neurons * sizeof(post_event_history_t));
sizeof(post_event_history_t) = 100
I see that sizeof(post_event_history_t) will be 4 + 16*4 + 16*(0, 2 or 4), in this case with a trace size of 2:
- 4 from uint32_t count_minus_one
- 16*4 from uint32_t times[MAX_POST_SYNAPTIC_EVENTS]
- 16*(0, 2 or 4) from post_trace_t traces[MAX_POST_SYNAPTIC_EVENTS];

which gives 4 + 64 + 32 = 100.
log_info("post_trace_t %u", sizeof(post_trace_t)); gives: [INFO] (post_events.h: 40): post_trace_t 2
Yet: sPyNNaker/neural_modelling/makefiles/neuron/IF_curr_exp_stdp_mad_pair_additive/Makefile has: TIMING_DEPENDENCE = $(NEURON_DIR)/neuron/plasticity/stdp/timing_dependence/timing_nearest_pair_impl.c which has: typedef struct post_trace_t { } post_trace_t;
That confuses me.
sPyNNaker/neural_modelling/makefiles/neuron/IF_curr_exp_stdp_mad_pair_additive/Makefile has: TIMING_DEPENDENCE = $(NEURON_DIR)/neuron/plasticity/stdp/timing_dependence/timing_pair_impl.c
sPyNNaker/neural_modelling/makefiles/neuron/IF_curr_exp_stdp_mad_nearest_pair_additive/Makefile has: TIMING_DEPENDENCE = $(NEURON_DIR)/neuron/plasticity/stdp/timing_dependence/timing_nearest_pair_impl.c
These are two independent things. It looks like you were looking at timing_pair in this case, which has a uint16_t as post_trace_t, which is indeed 2 bytes. Thus the post_event_history_t structure is ~25 KB with 255 neurons on the core. This is unfortunate but not unexpected. The DTCM calculations in Python should take this into account, and then the partitioner would reduce the neuron count, which would have a significant effect in this case!
Reimplemented get_max_atoms_per_core (https://github.com/SpiNNakerManchester/PACMAN/pull/206) does NOT fix this.
So as far as I can see, the partitioner only takes into account DTCM usage related to the neuron model components and the neuron recorder. There seems to have been a plan to do this for the synapse manager (and by extension for STDP I guess) but it hasn't happened yet (there's a TODO at https://github.com/SpiNNakerManchester/sPyNNaker/blob/270a7b62a931587f8691d5f3638ea962e1736932/spynnaker/pyNN/models/neuron/synaptic_manager.py#L201). Is this something we want to look at ahead of the release or is it something we should leave for the future?
Future. We could also assign a master's student to this... or a summer student.
Just having a look at this issue again, since there have been a few updates since we last looked at it. It appears now as though we can go to a maximum of n_neurons = 222 in IntroLab/learning/stdp.py before any problems start; the failures then appear similar from then onwards.
Confirmed it still errors:
- at 240: Could not initialise DMA buffers (spike_processing.c: 454)
- at 250: Unable to allocate global STDP structures - Out of DTCM: Try reducing the number of neurons per core to fix this problem (post_events.h: 91)
Confirmed this is still an issue.
Script: https://github.com/SpiNNakerManchester/IntroLab/stdp.py with n_neurons = 207:

in_spikes_initialize_spike_buffer(incoming_spike_buffer_size) returns False

which looks like the DTCM calculations are incorrect!