Open tehreemnaqvi opened 4 years ago
I do not know exactly what the above RuntimeError means, but I will try to answer your question. For example, suppose a layer is defined as `self.cnn11 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1, bias=False)`. Then the membrane variable can be defined as `mem_11 = torch.zeros(batch_size, out_channels, feature_map_height, feature_map_width)`. As long as the number of output channels and the feature-map height and width match the output of the nn layer you defined, there should be no problem. Hope this helps!
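A minimal sketch of the shape matching described above, assuming CIFAR-10-sized inputs (3x32x32) and an illustrative batch size of 8 (both are my assumptions, not from the repo):

```python
import torch
import torch.nn as nn

batch_size = 8  # illustrative choice
device = torch.device("cpu")

# Same layer as in the snippet above. With kernel_size=3, stride=1, padding=1,
# the spatial size is preserved: a 32x32 input yields a 32x32 output.
cnn11 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3,
                  stride=1, padding=1, bias=False)

# The membrane potential tensor must match the conv output shape exactly:
# (batch_size, out_channels, H_out, W_out).
mem_11 = torch.zeros(batch_size, 64, 32, 32, device=device)

x = torch.randn(batch_size, 3, 32, 32)
out = cnn11(x)
# Shapes agree, so mem_11 can accumulate the conv output over timesteps.
assert out.shape == mem_11.shape
```

If the shapes disagreed (e.g. the membrane tensor still used 32x32 after a pooling layer halved the feature map), the accumulation step would raise a size-mismatch RuntimeError.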
Thank you very much, I got it. In your forward function, what is the purpose of mask_11? Does it refer to the weight matrix?
mask_11 = Variable(torch.ones(input.size(0), 64, 32, 32).cuda(), requires_grad=False)
The mask incorporates the dropout functionality: it remembers the same random subset of units for the entire time window. Please find a detailed explanation of SNN dropout in section 2.2.2 of our paper (https://www.frontiersin.org/articles/10.3389/fnins.2020.00119/full).
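A sketch of the idea, under my own assumptions (the quoted line above builds a mask of ones, i.e. no units dropped, which would be the inference-time case; the training-time behavior described in the paper samples a random binary mask once and reuses it at every timestep):

```python
import torch

p = 0.2                      # dropout probability (illustrative)
batch_size, C, H, W = 8, 64, 32, 32
T = 10                       # number of timesteps (illustrative)

# Sample the mask ONCE per forward pass. Dividing by (1 - p) is the usual
# inverted-dropout scaling that keeps the expected activation unchanged.
mask_11 = (torch.rand(batch_size, C, H, W) > p).float() / (1 - p)
mask_11.requires_grad_(False)

for t in range(T):
    spikes = torch.randn(batch_size, C, H, W)  # stand-in for the layer output
    # The SAME units are masked at every timestep t, unlike standard dropout,
    # which would resample the mask on each call.
    spikes = spikes * mask_11
```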
Thank you very much
@tehreemnaqvi Hello, have you tried to implement VGG16 on the CIFAR10 dataset? I ran into an issue where the accuracy stays constant at 10.0 and the loss at 2.3026.
Hi,
I am referring to your code and have some confusion about the forward function.
I'm trying to implement VGG11 using the CIFAR10 dataset but got some dimension errors.
Can you please explain how you chose the dimensions inside the forward function, like this:
torch.zeros(batch_size, 64, 32, 32, device=device))
When I tried this with VGG11, I got an error, which means my dimensions are not correct.
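As a sanity check (plain Python, my own sketch of the standard VGG11 configuration rather than the repo's exact code): the convs are size-preserving, but each max-pool halves the feature map, so the membrane tensors deeper in the network must shrink accordingly rather than all staying at 32x32.

```python
# VGG11-style network on 32x32 CIFAR-10 inputs. Conv layers use
# kernel_size=3, stride=1, padding=1 (size-preserving); each max-pool
# (kernel_size=2, stride=2) halves the spatial size.
vgg11_channels = [64, 128, 256, 256, 512, 512, 512, 512]  # conv out_channels
pool_after = {0, 1, 3, 5, 7}  # conv indices followed by a max-pool in VGG11

size = 32
for i, c in enumerate(vgg11_channels):
    if i in pool_after:
        size //= 2  # the conv keeps the size; the pool halves it
    print(f"after conv{i}: channels={c}, spatial={size}x{size}")
# Five pools: 32 -> 16 -> 8 -> 4 -> 2 -> 1, so the final feature map is
# 512 x 1 x 1 and the first linear layer expects 512 inputs.
```

So a membrane tensor for, say, the conv after the second pool would be `torch.zeros(batch_size, 256, 8, 8, device=device)`, not `(..., 32, 32)`; using 32x32 everywhere is a common source of exactly this kind of dimension error.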
Below is the snippet of my forward function: