Closed: AdamDHines closed this issue 11 months ago.
Hello, the error message indicates that the memory requirements of your model may exceed the memory available on the chip.
Thanks, it was indeed that the model size was too big - I wasn't fully aware of the memory constraints. I significantly reduced my model size and was able to run inference on it.
I'm attempting to convert a pre-existing torch network to run on the speck2fdevkit and am running into some trouble mapping it onto the device. The full code for my network can be found here.
My network is pre-trained and is a basic ANN that multiplies inputs by weights across 2 `nn.Linear` layers (I don't perform any convolution in my `nn.Sequential`). I was able to successfully convert the model to a sinabs model and run a basic inference on my GPU with the following code:

This gives me the expected output, albeit less accurate than running the same model as an ANN rather than as an SNN - but it runs, and I can get meaningful output from the spike rasters. My input is a [1, 784] tensor, where `self.dims[0]` and `self.dims[1]` = 28.
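For reference, the conversion step is along these lines (a simplified sketch - the layer sizes and the exact `from_model` arguments here are illustrative, not my actual code):

```python
import torch
import torch.nn as nn
from sinabs.from_torch import from_model

# Illustrative stand-in for the pre-trained ANN: two linear layers, no conv
ann = nn.Sequential(
    nn.Linear(784, 500),
    nn.ReLU(),
    nn.Linear(500, 10),
)

# Convert the ANN to a spiking sinabs model (ReLUs become IAF layers)
snn = from_model(ann, add_spiking_output=True).spiking_model
snn = snn.to("cuda")

# The first dimension is treated as time: 100 time steps of a [784] input
x = torch.rand(100, 784).to("cuda")
with torch.no_grad():
    out = snn(x)  # spike raster, one row of output spikes per time step
```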
I have been attempting to follow the DynapCNN guide to map this onto the devkit, but am unable to get it to work. Here is the modified `evaluate` function I have tried, which includes adding a 'dummy' convolution in order to be able to use the `DynapCNN` backend (and get the model to convert):

Running this I get the following error:
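For context, the 'dummy' convolution is essentially a 1x1 pass-through layer in front of the linear layers, so that the network starts with a convolution as the `DynapCNN` backend expects - schematically (illustrative, not my exact code):

```python
import torch.nn as nn

# 1x1 conv initialised to the identity, so it passes input through unchanged
dummy = nn.Conv2d(1, 1, kernel_size=1, bias=False)
nn.init.constant_(dummy.weight, 1.0)

model = nn.Sequential(
    dummy,           # [1, 1, 28, 28] -> [1, 1, 28, 28]
    nn.Flatten(),    # -> [1, 784]
    nn.Linear(784, 500),
    nn.ReLU(),
    nn.Linear(500, 10),
)
```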
I know that the chip works and I have access to it, because following the MNIST quick-start in the DynapCNN guide works for me:
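That quick-start flow boils down to something like this (paraphrased from memory of the sinabs-dynapcnn documentation, so the details are approximate):

```python
import torch.nn as nn
from sinabs.from_torch import from_model
from sinabs.backend.dynapcnn import DynapcnnNetwork

# Small conv net as a stand-in for the quick-start MNIST model
ann = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AvgPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),
)
snn = from_model(ann, add_spiking_output=True).spiking_model

# Wrap for the chip, quantise the weights, and map onto the devkit
dynapcnn_net = DynapcnnNetwork(snn, input_shape=(1, 28, 28), discretize=True)
dynapcnn_net.to(device="speck2fdevkit:0")
```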
From this I have a couple of questions, mainly around how I need to modify my `nn.Sequential` to run it on chip. Much appreciated and thanks in advance!