Closed PaulMuadDib closed 5 years ago
I'm guessing it's because of this for loop:
```python
for neur in range(len(self.target.reward)):
    ...
```
If that loops through all neurons in the layer (which happens every timestep), then it will probably slow things down quite a bit.
You can use Python's built-in cProfile to check what is causing the slowdown.
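As a quick sketch of how to do that, you can profile a candidate hot spot directly with `cProfile` and `pstats` from the standard library (the `slow_update` function below is a hypothetical stand-in for the per-neuron loop, not BindsNET code):

```python
import cProfile
import io
import pstats

def slow_update(n):
    # Stand-in for a per-neuron Python loop executed every timestep.
    total = 0.0
    for neur in range(n):
        total += neur * 0.5
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_update(100_000)
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)  # show the top 5 entries by cumulative time
print(stream.getvalue())
```

Functions dominating the cumulative-time column are the ones worth vectorizing first.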
Thank you! Indeed, the following is much faster and enables me to test this way of backpropagating the reward in a deep SNN:
```python
# Compute weight update.
PostPre = self.nu[0] * self.target.reward * self.e_trace
self.connection.w += PostPre
# Sort to find the presynaptic neurons with the largest STDP update.
values, indices = PostPre.sort(dim=0, descending=True)
# Punish the presynaptic neurons inducing the min (negative) update.
to_punish = torch.nonzero(self.target.reward < 0).view(-1)
self.source.reward[indices[-1, to_punish].view(-1)] = self.target.reward[to_punish]
# Reward the presynaptic neurons inducing the max (positive) update.
to_reward = torch.nonzero(self.target.reward > 0).view(-1)
self.source.reward[indices[0, to_reward].view(-1)] = self.target.reward[to_reward]
```
Hi BindsNET, setting aside synapses with conduction delays, I am trying to backpropagate a reward from the output layer to the previous ones (after having added a "reward" attribute to the layers), using a modified version of the MSTDPET learning rule, according to:
and:
I also call the synapse updates in reverse order (since the rewards are computed from the output layer back to the previous ones) in the "run" definition in networks.
But this little addition takes much longer to compute (from 1.5 s to 36 s on the GPU). Would you have an idea why?
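For context, the core of the MSTDPET update (Florian, 2007) applied at each timestep can be sketched for a single synapse as follows; the constants and function name here are illustrative, not BindsNET's API:

```python
import math

def mstdpet_step(w, e_trace, stdp, reward, lr=0.01, tau_e=20.0, dt=1.0):
    """One timestep of a scalar MSTDPET-style update:
    decay the eligibility trace, accumulate the current STDP term,
    then apply the reward-modulated weight change."""
    e_trace = e_trace * math.exp(-dt / tau_e) + stdp
    w = w + lr * reward * e_trace
    return w, e_trace

w, e = 0.5, 0.0
w, e = mstdpet_step(w, e, stdp=0.1, reward=1.0)
```

Because the eligibility trace keeps a decaying memory of recent STDP events, the reward only needs to be a scalar per neuron at update time, which is what makes a fully vectorized (loop-free) implementation possible.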