Open MAD-hav-KGP opened 5 months ago
Hi Madhav, can you please attach the code of your VGG model?
Hi, thank you so much for responding. Here's the code for our model:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, BatchNormalization, Activation,
                                     Dropout, AveragePooling2D, Flatten, Dense)
from tensorflow.keras import regularizers


class build_model:
    def __init__(self, train=True):
        self.num_classes = 10
        self.weight_decay = 0.0001
        self.x_shape = [32, 32, 3]

        self.model = self.build_model()
        print('train = ', train)
        if train:
            self.model = self.train(self.model)  # train() is defined elsewhere (not shown)
        else:
            print('Loading pretrained weights...')
            self.model.load_weights('cnn_mdl.h5')

    def extract_model(self):
        return self.model

    def build_model(self):
        use_bias = True
        model = Sequential()
        weight_decay = self.weight_decay

        # Block 1: 2 x Conv(64) + average pooling
        model.add(Conv2D(64, (3, 3), padding='same', use_bias=use_bias,
                         input_shape=self.x_shape,
                         kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(BatchNormalization())
        model.add(Activation('relu'))
        model.add(Dropout(0.3))

        model.add(Conv2D(64, (3, 3), padding='same', use_bias=use_bias,
                         kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(BatchNormalization())
        model.add(Activation('relu'))
        model.add(AveragePooling2D(pool_size=(2, 2)))

        # Block 2: 2 x Conv(128) + average pooling
        model.add(Conv2D(128, (3, 3), padding='same', use_bias=use_bias,
                         kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(BatchNormalization())
        model.add(Activation('relu'))
        model.add(Dropout(0.4))

        model.add(Conv2D(128, (3, 3), padding='same', use_bias=use_bias,
                         kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(BatchNormalization())
        model.add(Activation('relu'))
        model.add(AveragePooling2D(pool_size=(2, 2)))

        # Block 3: 3 x Conv(256) + average pooling
        model.add(Conv2D(256, (3, 3), padding='same', use_bias=use_bias,
                         kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(BatchNormalization())
        model.add(Activation('relu'))
        model.add(Dropout(0.4))

        model.add(Conv2D(256, (3, 3), padding='same', use_bias=use_bias,
                         kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(BatchNormalization())
        model.add(Activation('relu'))
        model.add(Dropout(0.4))

        model.add(Conv2D(256, (3, 3), padding='same', use_bias=use_bias,
                         kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(BatchNormalization())
        model.add(Activation('relu'))
        model.add(Dropout(0.4))
        model.add(AveragePooling2D(pool_size=(2, 2)))

        # Block 4: 3 x Conv(512) + average pooling
        model.add(Conv2D(512, (3, 3), padding='same', use_bias=use_bias,
                         kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(BatchNormalization())
        model.add(Activation('relu'))
        model.add(Dropout(0.4))

        model.add(Conv2D(512, (3, 3), padding='same', use_bias=use_bias,
                         kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(BatchNormalization())
        model.add(Activation('relu'))
        model.add(Dropout(0.4))

        model.add(Conv2D(512, (3, 3), padding='same', use_bias=use_bias,
                         kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(BatchNormalization())
        model.add(Activation('relu'))
        model.add(Dropout(0.4))
        model.add(AveragePooling2D(pool_size=(2, 2)))

        # Block 5: 3 x Conv(512) + average pooling
        model.add(Conv2D(512, (3, 3), padding='same', use_bias=use_bias,
                         kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(BatchNormalization())
        model.add(Activation('relu'))
        model.add(Dropout(0.4))

        model.add(Conv2D(512, (3, 3), padding='same', use_bias=use_bias,
                         kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(BatchNormalization())
        model.add(Activation('relu'))
        model.add(Dropout(0.4))

        model.add(Conv2D(512, (3, 3), padding='same', use_bias=use_bias,
                         kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(BatchNormalization())
        model.add(Activation('relu'))
        model.add(Dropout(0.4))
        model.add(AveragePooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.5))

        # Classifier head
        model.add(Flatten())
        model.add(Dense(512, use_bias=use_bias,
                        kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(BatchNormalization())
        model.add(Activation('relu'))
        model.add(Dropout(0.5))
        model.add(Dense(self.num_classes, use_bias=use_bias,
                        kernel_regularizer=regularizers.l2(weight_decay)))
        model.add(Activation('softmax'))
        return model
```
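For completeness, we use the class roughly like this (a minimal sketch; our actual training pipeline is not shown):

```python
# Build the network and load the pretrained CIFAR-10 weights (skips training).
cnn_model = build_model(train=False).extract_model()
cnn_model.summary()
```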
The model looks good. Can you let me know which TensorFlow version you are using? Spkeras has been tested with TensorFlow versions 2.3 to 2.5, and we cannot guarantee compatibility with versions outside this range.
Hi, we're using TF version 2.3.1, so hopefully it isn't a version issue. It's just that this bit-width issue seemed a bit strange 😅
Thanks and regards, Madhav
I also faced the same problem (also on TF version 2.3.1). How can this issue be solved? Is there anything else you tried, @Dengyu-Wu? On your system, what accuracy do you get when signed_bit is set to anything other than 0? Kindly help to solve the issue, @Dengyu-Wu.
Hello @Dengyu-Wu, we are attempting to convert a VGG16 network to an SNN, but we get low accuracies whenever signed_bit is set to anything other than 0 (including 32, with all other parameters unchanged). Please let us know if you've seen this issue before, or whether it could be something on our end.
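For reference, the conversion itself follows the pattern from the spkeras README, roughly as sketched below; `cnn_model`, `x_train`, `x_test`, and `y_test` are placeholder names for our trained Keras model and the CIFAR-10 arrays:

```python
from spkeras.models import cnn_to_snn  # assuming the entry point from the spkeras README

# cnn_model: the trained Keras VGG model from the class above
# x_train: CIFAR-10 training images, used to calibrate the conversion
snn_model = cnn_to_snn(signed_bit=32)(cnn_model, x_train)  # signed_bit=0 for the full-precision run
snn_model.evaluate(x_test, y_test)
```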
Output for `signed_bit = 0`:

```
{'timesteps': 256, 'thresholding': 0.5, 'amp_factor': 100, 'signed_bit': 0, 'spike_ext': 0, 'epsilon': 0.001, 'use_bias': True, 'scaling_factor': 1, 'noneloss': False, 'method': 1}
313/313 [==============================] - 35s 111ms/step - loss: 1624.8730 - accuracy: 0.7886
```
Output for `signed_bit = 32`:

```
{'timesteps': 256, 'thresholding': 0.5, 'amp_factor': 100, 'signed_bit': 32, 'spike_ext': 0, 'epsilon': 0.001, 'use_bias': True, 'scaling_factor': 1, 'noneloss': False, 'method': 1}
313/313 [==============================] - 39s 125ms/step - loss: 375521.1562 - accuracy: 0.1000
```
Thanks and regards, Madhav