synsense / sinabs

A deep learning library for spiking neural networks, based on PyTorch, that focuses on fast training and supports inference on neuromorphic hardware.
https://sinabs.readthedocs.io
GNU Affero General Public License v3.0

Difficulty mapping custom SNN to speck2fdevkit #118

Closed AdamDHines closed 11 months ago

AdamDHines commented 11 months ago

I'm in the process of converting a pre-existing torch network to run on the speck2fdevkit and am running into some trouble mapping it onto the device. The full code for my network can be found here.

My network is pre-trained and is a basic ANN that multiplies inputs by weights across 2 nn.Linear layers (I don't perform any convolution in my nn.Sequential). I was able to successfully convert the model to a sinabs model and run basic inference on my GPU with the following code:

def evaluate(self, model, test_loader, layers=None):
    """
    Run the inferencing model and calculate the accuracy.

    :param test_loader: Testing data loader
    :param layers: Layers to pass data through
    """
    # Initialize the tqdm progress bar
    pbar = tqdm(total=self.num_places,
                desc="Running the test network",
                position=0)

    self.inference = nn.Sequential(
        self.feature_layer.w,
        nn.ReLU(),
        self.output_layer.w,
    )

    input_shape = (1, self.dims[0] * self.dims[1])
    self.sinabs_model = from_model(
                            self.inference, 
                            input_shape=input_shape,
                            batch_size=1,
                            add_spiking_output=True,
                            synops=False,
                            )
    # Initialize the output spikes variable
    out = []
    # Run inference for the specified number of timesteps
    for spikes, labels in test_loader:
        # Set device
        spikes, labels = spikes.to(self.device), labels.to(self.device)
        spikes = sl.FlattenTime()(spikes)
        # Forward pass
        spikes = self.forward(spikes)
        output = spikes.sum(dim=0).squeeze()
        # Add output spikes to list
        out.append(output.detach().cpu().tolist())
        pbar.update(1)

def forward(self, spikes):
    """
    Compute the forward pass of the model.

    Parameters:
    - spikes (Tensor): Input spikes.

    Returns:
    - Tensor: Output after processing.
    """

    spikes = self.sinabs_model(spikes)

    return spikes

This gives me the expected output, albeit not as accurate as running the original network as an ANN rather than an SNN - but it runs, and I can get meaningful output from the spike rasters in this model. My input is a [1, 784] tensor, where self.dims[0] and self.dims[1] = 28.
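For context, the tensor shapes involved look roughly like this (a minimal sketch; the number of timesteps and the random input are placeholders, not my actual data pipeline):

import torch
import sinabs.layers as sl

n_timesteps = 10                        # placeholder value
dims = (28, 28)                         # self.dims in my code

# the test loader yields (batch, time, features) spike rasters
spikes = torch.rand(1, n_timesteps, dims[0] * dims[1]).gt(0.5).float()

# FlattenTime folds the time dimension into the batch dimension,
# so the Linear layers see a (batch * time, 784) tensor
flat = sl.FlattenTime()(spikes)
print(flat.shape)                       # torch.Size([10, 784])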

I have been attempting to follow the DynapCNN guide to map this onto the devkit, but I am unable to get it to work. Here is the modified evaluate function I have tried, which adds a 'dummy' convolution so that the model can be converted for the DynapCNN backend:

def evaluate(self, model, test_loader, layers=None):
    """
    Run the inferencing model and calculate the accuracy.

    :param test_loader: Testing data loader
    :param layers: Layers to pass data through
    """
    # Initialize the tqdm progress bar
    pbar = tqdm(total=self.num_places,
                desc="Running the test network",
                position=0)
    in_channels = out_channels = 1
    #nn.init.eye_(self.inert_conv_layer.weight)
    self.inference = nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, bias=False),
        nn.ReLU(),
        self.feature_layer.w,
        nn.ReLU(),
        self.output_layer.w,
    )

    input_shape = (1, 1, self.dims[0] * self.dims[1])
    self.sinabs_model = from_model(
                            self.inference, 
                            input_shape=input_shape,
                            batch_size=1,
                            add_spiking_output=True,
                            )

    dynapcnn = DynapcnnNetwork(snn=self.sinabs_model, 
                                input_shape=input_shape, 
                                discretize=True, 
                                dvs_input=False)
    devkit_name = "speck2fdevkit"

    # use the `to` method of DynapcnnNetwork to deploy the SNN to the devkit
    dynapcnn.to(device=devkit_name, chip_layers_ordering="auto")

Running this I get the following error:

./mambaforge-pypy3/envs/sinabs/lib/python3.11/site-packages/sinabs/backend/dynapcnn/mapping.py", line 185, in recover_mapping
    raise ValueError("No valid mapping found")
ValueError: No valid mapping found

I know that the chip works and I have access to it, because following the MNIST quick-start in the DynapCNN guide works for me:

Network is valid
speck2fdevkit:0
{'speck2fdevkit:0': device::DeviceInfo(serial_number=, usb_bus_number=2, usb_device_address=4, logic_version=0, device_type_name=Speck2fDevKit)}
The SNN is deployed on the core: [0, 1, 2, 3]

From this I have a couple of questions:

Much appreciated and thanks in advance!

sheiksadique commented 11 months ago

Hello, the error message indicates that your model's memory requirements are likely larger than what is available on the chip.
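One way to see where the limits are exceeded is to inspect the per-layer memory estimate before deploying, roughly like this (a sketch; I'm assuming the memory_summary helper of DynapcnnNetwork is available in your installed version of sinabs-dynapcnn):

from sinabs.backend.dynapcnn import DynapcnnNetwork

# build the DynapcnnNetwork exactly as in your snippet, but inspect it
# before calling .to(...)
dynapcnn = DynapcnnNetwork(snn=sinabs_model,
                           input_shape=input_shape,
                           discretize=True,
                           dvs_input=False)

# per-layer kernel/neuron memory estimates; each layer has to fit on one
# of the chip's cores. A Linear(784, N) layer is mapped to a 1x1 convolution
# with 784 input channels, so its kernel memory grows as 784 * N.
print(dynapcnn.memory_summary())

In other words, fully connected layers sized from the whole 28x28 input become very large once they are expressed as convolutions on the chip.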

AdamDHines commented 11 months ago

Thanks, it was indeed that the model size was too big - I wasn't fully aware of the memory constraints. I significantly reduced my model size and was able to run inference on it.
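For anyone who hits the same wall, the fix boiled down to shrinking the two Linear layers, roughly along these lines (a sketch with placeholder sizes, not my exact final dimensions):

import torch.nn as nn

# same structure as before, but with far fewer units so that the equivalent
# 1x1 convolutions stay within the per-core kernel memory
in_features = 14 * 14       # e.g. a downsampled input instead of 28 * 28
hidden = 64                 # placeholder hidden size
out_features = 10           # placeholder output size

inference = nn.Sequential(
    nn.Conv2d(1, 1, kernel_size=1, stride=1, padding=0, bias=False),  # dummy conv as before
    nn.ReLU(),
    nn.Linear(in_features, hidden, bias=False),
    nn.ReLU(),
    nn.Linear(hidden, out_features, bias=False),
)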