chan8972 / Enabling_Spikebased_Backpropagation


Issue about Poisson encoding method #2

Open · Yanqi-Chen opened this issue 4 years ago

Yanqi-Chen commented 4 years ago

I noticed that Poisson-distributed spikes are implemented as below:

rand_num = Variable(torch.rand(input.size(0), input.size(1), input.size(2), input.size(3)).cuda())
Poisson_d_input = (torch.abs(input) > rand_num).type(torch.cuda.FloatTensor)
Poisson_d_input = torch.mul(Poisson_d_input, torch.sign(input))

which means that the encoded spikes are ternary-valued, i.e. {0, ±1}. This is a little different from the well-known encoding style with binary values {0, 1}.

I wonder why you chose this way of Poisson encoding.
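
For reference, this is roughly the difference between the two schemes as I understand it (a minimal sketch with hypothetical function names, not code from this repository):

import torch

def binary_poisson_encode(x):
    # Conventional rate coding: treat each pixel intensity in [0, 1] as the
    # firing probability per time step, producing binary spikes in {0, 1}.
    return (x > torch.rand_like(x)).float()

def signed_poisson_encode(x):
    # Scheme in the snippet above: |x| sets the firing probability and the
    # sign of x is carried on the spike, giving values in {0, +1, -1}.
    spikes = (x.abs() > torch.rand_like(x)).float()
    return spikes * torch.sign(x)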

chan8972 commented 4 years ago

Thank you for the question. We found that normalizing inputs to have zero mean helps achieve better classification results, especially on the SVHN and CIFAR-10 datasets. This spike encoding scheme can be implemented efficiently with an address-event-representation (AER) communication protocol, which represents discrete events as 4-tuples {x, y, t, p} consisting of the coordinates x, y; the timestamp t; and the polarity p. The details of our encoding scheme can be found in section 3.1.4 of our paper (https://www.frontiersin.org/articles/10.3389/fnins.2020.00119/full).
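
As a rough illustration only (my own sketch, not the authors' implementation; `signed_spikes_to_aer` is a hypothetical helper), a signed spike map at one time step could be serialized into such 4-tuples like this:

import torch

def signed_spikes_to_aer(spike_map, t):
    # spike_map: (H, W) tensor with values in {0, +1, -1} at time step t.
    # Returns a list of AER-style events (x, y, t, p), where p is the polarity.
    ys, xs = torch.nonzero(spike_map, as_tuple=True)
    return [(int(x), int(y), t, int(spike_map[y, x])) for x, y in zip(xs, ys)]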

Yanqi-Chen commented 4 years ago

Thanks! I wasn't familiar with the AER protocol. Your work brings the backpropagation algorithm to deeper SNNs, which is a great achievement!