NeuromorphicProcessorProject / snn_toolbox

Toolbox for converting analog neural networks to spiking neural networks (ANN to SNN) and running them in a spiking neuron simulator.

Key Error #135

Closed: Zauze closed this issue 1 year ago

Zauze commented 1 year ago

Hi guys, I am currently trying to convert tiny-YOLOv3 to an SNN (without simulation etc.), but I am running into the same KeyError as in #82.

My converted architecture looks like this:

 Layer (type)                   Output Shape         Param #     Connected to
==================================================================================================
 input_1 (InputLayer)           [(None, None, None,  0           []
                                 3)]

 conv2d_1 (Conv2D)              (None, None, None,   432         ['input_1[0][0]']
                                16)

 batch_normalization_1 (BatchNo  (None, None, None,   64         ['conv2d_1[0][0]']
 rmalization)                   16)

 leaky_re_lu_1 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_1[0][0]']
                                16)

 max_pooling2d_1 (MaxPooling2D)  (None, None, None,   0          ['leaky_re_lu_1[0][0]']
                                16)

 conv2d_2 (Conv2D)              (None, None, None,   4608        ['max_pooling2d_1[0][0]']
                                32)

 batch_normalization_2 (BatchNo  (None, None, None,   128        ['conv2d_2[0][0]']
 rmalization)                   32)

 leaky_re_lu_2 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_2[0][0]']
                                32)

 max_pooling2d_2 (MaxPooling2D)  (None, None, None,   0          ['leaky_re_lu_2[0][0]']
                                32)

 conv2d_3 (Conv2D)              (None, None, None,   18432       ['max_pooling2d_2[0][0]']
                                64)

 batch_normalization_3 (BatchNo  (None, None, None,   256        ['conv2d_3[0][0]']
 rmalization)                   64)

 leaky_re_lu_3 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_3[0][0]']
                                64)

 max_pooling2d_3 (MaxPooling2D)  (None, None, None,   0          ['leaky_re_lu_3[0][0]']
                                64)

 conv2d_4 (Conv2D)              (None, None, None,   73728       ['max_pooling2d_3[0][0]']
                                128)

 batch_normalization_4 (BatchNo  (None, None, None,   512        ['conv2d_4[0][0]']
 rmalization)                   128)

 leaky_re_lu_4 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_4[0][0]']
                                128)

 max_pooling2d_4 (MaxPooling2D)  (None, None, None,   0          ['leaky_re_lu_4[0][0]']
                                128)

 conv2d_5 (Conv2D)              (None, None, None,   294912      ['max_pooling2d_4[0][0]']
                                256)

 batch_normalization_5 (BatchNo  (None, None, None,   1024       ['conv2d_5[0][0]']
 rmalization)                   256)

 leaky_re_lu_5 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_5[0][0]']
                                256)

 max_pooling2d_5 (MaxPooling2D)  (None, None, None,   0          ['leaky_re_lu_5[0][0]']
                                256)

 conv2d_6 (Conv2D)              (None, None, None,   1179648     ['max_pooling2d_5[0][0]']
                                512)

 batch_normalization_6 (BatchNo  (None, None, None,   2048       ['conv2d_6[0][0]']
 rmalization)                   512)

 leaky_re_lu_6 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_6[0][0]']
                                512)

 max_pooling2d_6 (MaxPooling2D)  (None, None, None,   0          ['leaky_re_lu_6[0][0]']
                                512)

 conv2d_7 (Conv2D)              (None, None, None,   4718592     ['max_pooling2d_6[0][0]']
                                1024)

 batch_normalization_7 (BatchNo  (None, None, None,   4096       ['conv2d_7[0][0]']
 rmalization)                   1024)

 leaky_re_lu_7 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_7[0][0]']
                                1024)

 conv2d_8 (Conv2D)              (None, None, None,   262144      ['leaky_re_lu_7[0][0]']
                                256)

 batch_normalization_8 (BatchNo  (None, None, None,   1024       ['conv2d_8[0][0]']
 rmalization)                   256)

 leaky_re_lu_8 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_8[0][0]']
                                256)

 conv2d_11 (Conv2D)             (None, None, None,   32768       ['leaky_re_lu_8[0][0]']
                                128)

 batch_normalization_10 (BatchN  (None, None, None,   512        ['conv2d_11[0][0]']
 ormalization)                  128)

 leaky_re_lu_10 (LeakyReLU)     (None, None, None,   0           ['batch_normalization_10[0][0]']
                                128)

 up_sampling2d_1 (UpSampling2D)  (None, None, None,   0          ['leaky_re_lu_10[0][0]']
                                128)

 concatenate_1 (Concatenate)    (None, None, None,   0           ['up_sampling2d_1[0][0]',
                                384)                              'leaky_re_lu_5[0][0]']

 conv2d_9 (Conv2D)              (None, None, None,   1179648     ['leaky_re_lu_8[0][0]']
                                512)

 conv2d_12 (Conv2D)             (None, None, None,   884736      ['concatenate_1[0][0]']
                                256)

 batch_normalization_9 (BatchNo  (None, None, None,   2048       ['conv2d_9[0][0]']
 rmalization)                   512)

 batch_normalization_11 (BatchN  (None, None, None,   1024       ['conv2d_12[0][0]']
 ormalization)                  256)

 leaky_re_lu_9 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_9[0][0]']
                                512)

 leaky_re_lu_11 (LeakyReLU)     (None, None, None,   0           ['batch_normalization_11[0][0]']
                                256)

 conv2d_10 (Conv2D)             (None, None, None,   130815      ['leaky_re_lu_9[0][0]']
                                255)

 conv2d_13 (Conv2D)             (None, None, None,   65535       ['leaky_re_lu_11[0][0]']
                                255)

==================================================================================================
Total params: 8,858,734
Trainable params: 8,852,366
Non-trainable params: 6,368

The error occurs when handling max_pooling2d_1 inside the parse function of snntoolbox/parsing/utils.py. I suspect this happens because the previous layer, leaky_re_lu_1, is skipped and therefore never added to the name_map variable. When self.get_inbound_names is then called for the pooling layer, its only inbound layer is the LeakyReLU, which is missing from name_map; at that point the only layer in the name_map dict is conv2d_1.
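To make the suspected mechanism concrete, here is a minimal, self-contained sketch; it is a hypothetical simplification of the parse loop, not the toolbox's actual code:

class Layer:
    """Stand-in for a Keras layer with explicit inbound links."""
    def __init__(self, name, inbound):
        self.name = name
        self.inbound = inbound  # list of inbound Layer objects

conv = Layer('conv2d_1', [])
relu = Layer('leaky_re_lu_1', [conv])
pool = Layer('max_pooling2d_1', [relu])

name_map = {}
for idx, layer in enumerate([conv, relu, pool]):
    if 'leaky_re_lu' in layer.name:  # the parser drops LeakyReLU ...
        continue                     # ... so it never enters name_map
    if layer.inbound:
        # pool's only inbound layer is the skipped relu, so the lookup
        # by object id raises the same KeyError as in the trace below.
        inb_idxs = [name_map[str(id(inb))] for inb in layer.inbound]
    name_map[str(id(layer))] = idx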

I am using the development version of the toolbox (0.6.0) inside a conda environment (installed with pip in the activated environment).

tensorflow version 2.9.1

Here is the full stack trace:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[12], line 1
----> 1 snntoolbox.bin.run.main('config')

File c:\Users\Zauze\Anaconda3\envs\snn\lib\site-packages\snntoolbox\bin\run.py:31, in main(filepath)
     29 if filepath is not None:
     30     config = update_setup(filepath)
---> 31     run_pipeline(config)
     32     return
     34 parser = argparse.ArgumentParser(
     35     description='Run SNN toolbox to convert an analog neural network into '
     36                 'a spiking neural network, and optionally simulate it.')

File c:\Users\Zauze\Anaconda3\envs\snn\lib\site-packages\snntoolbox\bin\utils.py:88, in run_pipeline(config, queue)       
     86 print("Parsing input model...")
     87 model_parser = model_lib.ModelParser(input_model['model'], config)
---> 88 model_parser.parse()
     89 parsed_model = model_parser.build_parsed_model()
     91 # ___________________________ NORMALIZE _____________________________ #

File c:\Users\Zauze\Anaconda3\envs\snn\lib\site-packages\snntoolbox\parsing\utils.py:246, in AbstractModelParser.parse(self)
    244     inserted_flatten = False
    245 else:
--> 246     inbound = self.get_inbound_names(layer, name_map)
    248 attributes = self.initialize_attributes(layer)
    250 attributes.update({'layer_type': layer_type,
    251                    'name': self.get_name(layer, idx),
    252                    'inbound': inbound})

File c:\Users\Zauze\Anaconda3\envs\snn\lib\site-packages\snntoolbox\parsing\utils.py:411, in AbstractModelParser.get_inbound_names(self, layer, name_map)
    409     return [self.input_layer_name]
    410 else:
--> 411     inb_idxs = [name_map[str(id(inb))] for inb in inbound]
    412     return [self._layer_list[i]['name'] for i in inb_idxs]

File c:\Users\Zauze\Anaconda3\envs\snn\lib\site-packages\snntoolbox\parsing\utils.py:411, in <listcomp>(.0)
    409     return [self.input_layer_name]
    410 else:
--> 411     inb_idxs = [name_map[str(id(inb))] for inb in inbound]
    412     return [self._layer_list[i]['name'] for i in inb_idxs]

KeyError: '1590468567024'

The config file is attached as txt below.

config.txt
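For reference, the setup here is conversion-only. A hedged sketch of such a config, using the INI format from the toolbox documentation (the paths are placeholders and the key names should be checked against the installed version; this is not the attached config.txt):

[paths]
path_wd = /path/to/working_dir
dataset_path = /path/to/dataset
filename_ann = tiny_yolov3

[tools]
evaluate_ann = False
normalize = False
convert = True
simulate = False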

rbodo commented 1 year ago

You are probably right about the error diagnosis. Leaky relu is skipped because it is not compatible with the current interpretation of rate-encoded spiking neurons. A negative activation would require neurons to fire at a negative rate. You can work around this issue by replacing the leaky relu with a normal relu, which will likely require retraining. Or come up with a neuron encoding for negative activation values.