NeuromorphicProcessorProject / snn_toolbox

Toolbox for converting analog to spiking neural networks (ANN to SNN), and running them in a spiking neuron simulator.
MIT License

Parsing layer Flatten raises a KeyError #107

Closed annahambi closed 2 years ago

annahambi commented 3 years ago

During the SNN conversion of the model defined below (see "Extract from the Model Definition"), the KeyError shown under "Error" is raised on the Flatten layer. Do you have any idea where I am going wrong?

Error

Initializing INI simulator...

Loading data set from '.npz' files in *took path info out*/Experiments_SNN_Toolbox/04_data.

vgg_version_007
Pytorch model was successfully ported to ONNX.

Unable to use `same` padding. Add ZeroPadding2D layer to fix shapes.
Unable to use `same` padding. Add ZeroPadding2D layer to fix shapes.
Unable to use `same` padding. Add ZeroPadding2D layer to fix shapes.
Unable to use `same` padding. Add ZeroPadding2D layer to fix shapes.
Unable to use `same` padding. Add ZeroPadding2D layer to fix shapes.

ONNX model was successfully ported to Keras.
Evaluating input model on 100 samples...
Top-1 accuracy: 11.00%
Top-5 accuracy: 49.00%

Parsing input model...
Skipping layer InputLayer.
Parsing layer ZeroPadding2D.
Parsing layer Conv2D.
Using activation relu.
Skipping layer Activation.
Parsing layer ZeroPadding2D.
Parsing layer Conv2D.
Using activation relu.
Skipping layer Activation.
Parsing layer MaxPooling2D.
Parsing layer ZeroPadding2D.
Parsing layer Conv2D.
Using activation relu.
Skipping layer Activation.
Parsing layer ZeroPadding2D.
Parsing layer Conv2D.
Using activation relu.
Skipping layer Activation.
Parsing layer MaxPooling2D.
Parsing layer ZeroPadding2D.
Parsing layer Conv2D.
Using activation relu.
Skipping layer Activation.
Parsing layer ZeroPadding2D.
Parsing layer Conv2D.
Using activation relu.
Skipping layer Activation.
Parsing layer ZeroPadding2D.
Parsing layer Conv2D.
Using activation relu.
Skipping layer Activation.
Parsing layer MaxPooling2D.
Parsing layer ZeroPadding2D.
Parsing layer Conv2D.
Using activation relu.
Skipping layer Activation.
Parsing layer ZeroPadding2D.
Parsing layer Conv2D.
Using activation relu.
Skipping layer Activation.
Parsing layer ZeroPadding2D.
Parsing layer Conv2D.
Using activation relu.
Skipping layer Activation.
Parsing layer MaxPooling2D.
Parsing layer ZeroPadding2D.
Parsing layer Conv2D.
Using activation relu.
Skipping layer Activation.
Parsing layer ZeroPadding2D.
Parsing layer Conv2D.
Using activation relu.
Skipping layer Activation.
Parsing layer ZeroPadding2D.
Parsing layer Conv2D.
Using activation relu.
Skipping layer Activation.
Parsing layer MaxPooling2D.
Skipping layer Lambda.
Parsing layer Flatten.

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-4-9a0dbe3973d1> in <module>
     52 ###################
     53 
---> 54 main(config_filepath)

~/miniconda3/envs/tf20201215b/lib/python3.8/site-packages/snntoolbox/bin/run.py in main(filepath)
     29     if filepath is not None:
     30         config = update_setup(filepath)
---> 31         run_pipeline(config)
     32         return
     33 

~/miniconda3/envs/tf20201215b/lib/python3.8/site-packages/snntoolbox/bin/utils.py in run_pipeline(config, queue)
     86         print("Parsing input model...")
     87         model_parser = model_lib.ModelParser(input_model['model'], config)
---> 88         model_parser.parse()
     89         parsed_model = model_parser.build_parsed_model()
     90 

~/miniconda3/envs/tf20201215b/lib/python3.8/site-packages/snntoolbox/parsing/utils.py in parse(self)
    244                 inserted_flatten = False
    245             else:
--> 246                 inbound = self.get_inbound_names(layer, name_map)
    247 
    248             attributes = self.initialize_attributes(layer)

~/miniconda3/envs/tf20201215b/lib/python3.8/site-packages/snntoolbox/parsing/utils.py in get_inbound_names(self, layer, name_map)
    409             return [self.input_layer_name]
    410         else:
--> 411             inb_idxs = [name_map[str(id(inb))] for inb in inbound]
    412             return [self._layer_list[i]['name'] for i in inb_idxs]
    413 

~/miniconda3/envs/tf20201215b/lib/python3.8/site-packages/snntoolbox/parsing/utils.py in <listcomp>(.0)
    409             return [self.input_layer_name]
    410         else:
--> 411             inb_idxs = [name_map[str(id(inb))] for inb in inbound]
    412             return [self._layer_list[i]['name'] for i in inb_idxs]
    413 

KeyError: '139874875395568'

Extract from the Model Definition

import torch
import torch.nn as nn
from torch.autograd import Variable

class Model(nn.Module):

    def __init__(self,
                 ipt_size=(32, 32), 
                 pretrained=False, 
                 vgg_type='vgg16', 
                 num_classes=10):
        super(Model, self).__init__()

        print('vgg_version_008')

        # The input_shape field is required by SNN toolbox.
        self.input_shape = (3, 32, 32)

        # load convolutional part of vgg
        assert vgg_type in VGG_TYPES, "Unknown vgg_type '{}'".format(vgg_type)
        vgg_loader = VGG_TYPES[vgg_type]
        vgg = vgg_loader(pretrained=pretrained)
        self.features = vgg.features

        # init fully connected part of vgg
        test_ipt = Variable(torch.zeros(1,3,ipt_size[0],ipt_size[1]))
        test_out = vgg.features(test_ipt)
        self.n_features = test_out.size(1) * test_out.size(2) * test_out.size(3)
        self.flattener = nn.Sequential(nn.Flatten())
        self.classifier = nn.Sequential(nn.Linear(self.n_features, 4096),
                                        nn.ReLU(True),
                                        nn.Dropout(),
                                        nn.Linear(4096, 4096),
                                        nn.ReLU(True),
                                        nn.Dropout(),
                                        nn.Linear(4096, num_classes)
                                       )
        self._init_classifier_weights()

    def forward(self, x):
        x = self.features(x)
        # original: x = x.view(x.size(0), -1)
        # changed to:
        #x = x.flatten(start_dim=1)
        x = self.flattener(x)
        x = self.classifier(x)
        return x
rbodo commented 3 years ago

It seems the onnx2keras tool inserts a Lambda layer before the Flatten layer when porting your model from PyTorch to ONNX to Keras. The toolbox can't parse this Lambda layer. Maybe try to find out why this layer is added and how you can prevent it.
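To make the failure mode concrete, here is a minimal, hypothetical sketch of the parser's bookkeeping (not the real snntoolbox code): parsed layers are recorded in a `name_map` keyed by `str(id(layer))`, and a skipped layer (such as the Lambda) never gets an entry, so resolving the next layer's inbound connection raises exactly the KeyError seen in the traceback.

```python
# Hypothetical sketch of the parser's bookkeeping (assumption, not the real
# snntoolbox implementation). Skipped layers get no name_map entry, so a
# later inbound lookup by id fails with a KeyError.

class Layer:
    def __init__(self, name):
        self.name = name

conv, lam, flat = Layer("Conv2D"), Layer("Lambda"), Layer("Flatten")

name_map = {}
for i, layer in enumerate([conv, lam, flat]):
    if layer.name == "Lambda":          # skipped: no entry is recorded
        continue
    name_map[str(id(layer))] = i

inbound = [lam]                         # Flatten's inbound layer is the Lambda
try:
    inb_idxs = [name_map[str(id(inb))] for inb in inbound]
    failed = False
except KeyError as e:
    failed = True
    print("KeyError:", e)               # same failure mode as the traceback
```

This is why the traceback points at `get_inbound_names`: the Flatten layer's inbound is the skipped Lambda, whose id was never added to `name_map`.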

annahambi commented 3 years ago

Hi @rbodo, thanks for the fast response! I think the following is happening:

I don't know how to resolve this or how one could prevent onnx2keras from inserting the additional layer.
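One plausible reason for the inserted Lambda (an assumption, not confirmed from the onnx2keras source): PyTorch stores activations channels-first (NCHW) while Keras defaults to channels-last (NHWC), so flattening the same tensor in the two layouts yields different element orders. A converter that wants the flattened vector to match PyTorch's order must transpose back to NCHW before Flatten, which it may do via a Lambda layer. A pure-Python illustration:

```python
# Illustration (assumption): flattening the same 2x2x2 tensor in CHW
# (PyTorch) vs HWC (Keras) order produces differently ordered vectors,
# so a transpose is needed before Flatten to preserve PyTorch's order.

def flatten_chw(t):   # t[c][h][w], PyTorch layout
    return [t[c][h][w] for c in range(2) for h in range(2) for w in range(2)]

def flatten_hwc(t):   # same data stored as t[h][w][c], Keras layout
    return [t[h][w][c] for h in range(2) for w in range(2) for c in range(2)]

chw = [[[0, 1], [2, 3]], [[4, 5], [6, 7]]]   # 2x2x2 tensor in CHW order
hwc = [[[chw[c][h][w] for c in range(2)] for w in range(2)] for h in range(2)]

print(flatten_chw(chw))  # [0, 1, 2, 3, 4, 5, 6, 7]
print(flatten_hwc(hwc))  # [0, 4, 1, 5, 2, 6, 3, 7] -- channels interleaved
```

If this is indeed the cause, the Lambda is doing a layout transpose, which would explain why it appears only immediately before the Flatten layer.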

rbodo commented 3 years ago

Can you check the console output from onnx2keras? If I remember correctly there should be a message indicating why Lambda was added.

annahambi commented 3 years ago

Hi @rbodo, I have updated the issue above to show the full output when I try to run the conversion. There is no information from onnx2keras as far as I can see...

rbodo commented 3 years ago

Hmm, there might be a verbosity setting in onnx2keras now. Regardless, I won't be able to help debug or fix the underlying issue, but if you are not bound to using PyTorch, a quick workaround would be to define your model in Keras directly. Sorry I can't be of more help!