Unofficial ArcFace implementation in TensorFlow 2.0+ (ResNet50, MobileNetV2). "ArcFace: Additive Angular Margin Loss for Deep Face Recognition", published at CVPR 2019. With Colab.
In modules/models.py the backbones are loaded without their pretrained classification head (include_top=False), and a custom OutputLayer is added on top. Dropping the pretrained classifier also cuts off the GlobalAveragePooling layer, but OutputLayer does not contain one.
I propose something like this:
```python
def OutputLayer(embd_shape, w_decay=5e-4, name='OutputLayer'):
    def output_layer(x_in):
        x = inputs = Input(x_in.shape[1:])
        x = BatchNormalization()(x)  # maybe this layer is redundant
        x = GlobalAveragePooling2D()(x)
        x = Dropout(rate=0.5)(x)
        x = Flatten()(x)
        x = Dense(embd_shape, kernel_regularizer=_regularizer(w_decay))(x)
        x = BatchNormalization()(x)
        model = Model(inputs, x, name=name)
        return model(x_in)
    return output_layer
```
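For context, here is a minimal self-contained sketch of wiring the proposed OutputLayer onto a MobileNetV2 backbone loaded with include_top=False. The 112×112 input size, embd_shape=512, weights=None, and the use of tf.keras.regularizers.l2 in place of the repo's _regularizer helper are all illustrative assumptions, not values taken from the repo:

```python
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import (BatchNormalization, Dense, Dropout,
                                     Flatten, GlobalAveragePooling2D, Input)
# assumption: stands in for _regularizer from modules/models.py
from tensorflow.keras.regularizers import l2 as _regularizer

# OutputLayer as proposed above
def OutputLayer(embd_shape, w_decay=5e-4, name='OutputLayer'):
    def output_layer(x_in):
        x = inputs = Input(x_in.shape[1:])
        x = BatchNormalization()(x)
        x = GlobalAveragePooling2D()(x)  # pools the HxWxC map to a C-vector
        x = Dropout(rate=0.5)(x)
        x = Flatten()(x)
        x = Dense(embd_shape, kernel_regularizer=_regularizer(w_decay))(x)
        x = BatchNormalization()(x)
        model = Model(inputs, x, name=name)
        return model(x_in)
    return output_layer

# weights=None avoids downloading ImageNet weights; 112x112 is illustrative
inputs = Input((112, 112, 3))
features = tf.keras.applications.MobileNetV2(
    input_shape=(112, 112, 3), include_top=False, weights=None)(inputs)
embds = OutputLayer(embd_shape=512)(features)
model = Model(inputs, embds)
```

Because GlobalAveragePooling2D reduces the 4×4×1280 feature map to a 1280-vector before the Dense layer, the embedding head stays small; without it, Dense would be fed the full flattened map and its weight matrix would be roughly 16× larger.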
One effect of losing GlobalAveragePooling is that the MobileNetV2 backbone grows from 12 MB to 50 MB, though accuracy increases too; note, however, that training MobileNetV2 requires different hyperparameters, which will improve validation accuracy.