chan8972 / Enabling_Spikebased_Backpropagation

MIT License

Forward Function #3

Open tehreemnaqvi opened 4 years ago

tehreemnaqvi commented 4 years ago

Hi,

I am referring to your code and have some confusion about the forward function.

I'm trying to implement VGG11 on the CIFAR10 dataset but got some dimension errors.

Can you please explain how you chose the dimensions inside the forward function, like this?

torch.zeros(batch_size, 64, 32, 32, device=device))

When I tried this with VGG11, I got this error:

[screenshot of the RuntimeError]

It means my dimensions are not correct.

Below is the snippet of my forward function:

[screenshot of the forward function]

chan8972 commented 4 years ago

I do not know exactly what the above RuntimeError means, but I will try to answer your question. For example, suppose a layer is defined as `self.cnn11 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1, bias=False)`. Then the membrane variable can be defined as `mem_11 = torch.zeros(mini-batch size, number of out_channels, row size of feature map, column size of feature map)`. As long as the number of output channels and the feature map row and column sizes match the `nn` layer you defined, there should be no problem. Hope this helps!
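A minimal sketch of the shape-matching rule above, assuming the quoted `nn.Conv2d` definition and a CIFAR-10-sized input (the variable names here are illustrative, not from the repository):

```python
import torch
import torch.nn as nn

# Layer as quoted above: 3x3 kernel, stride 1, padding 1
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3,
                 stride=1, padding=1, bias=False)

batch_size = 4
x = torch.randn(batch_size, 3, 32, 32)  # CIFAR-10 sized input
out = conv(x)

# With kernel_size=3, stride=1, padding=1 the spatial size is preserved:
# floor((32 + 2*1 - 3) / 1) + 1 = 32, so the output is (4, 64, 32, 32).
# The membrane variable must match the conv output shape exactly.
mem = torch.zeros(batch_size, conv.out_channels, out.shape[2], out.shape[3])
assert mem.shape == out.shape
```

If a dimension error appears, printing `out.shape` after each conv/pool layer is a quick way to find where the membrane tensor and the layer output diverge.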

tehreemnaqvi commented 4 years ago

Thank you very much, I got it. In your forward function, what is the function of mask_11? Does it refer to the weight matrix?

mask_11 = Variable(torch.ones(input.size(0), 64, 32, 32).cuda(), requires_grad=False)

chan8972 commented 4 years ago

The mask implements the dropout functionality: it remembers the random subset of units to drop for the entire time window. Please find a detailed explanation of SNN dropout in Section 2.2.2 of our paper (https://www.frontiersin.org/articles/10.3389/fnins.2020.00119/full).

tehreemnaqvi commented 4 years ago

Thank you very much

GhostManMan commented 3 years ago

@tehreemnaqvi Hello, have you tried implementing VGG16 on the CIFAR10 dataset? I ran into an issue where the accuracy stays constant at 10.0 and the loss at 2.3026.