rcmalli / keras-vggface

VGGFace implementation with Keras Framework
MIT License

Finetuning with VGG16, val_loss and val_acc remain constant #43

Closed seppestaes closed 5 years ago

seppestaes commented 6 years ago

Library versions Tensorflow 1.5.1 Keras 2.2.2 keras_vggface 0.5

Bug report: fine-tuning VGGFace (VGG16) on a subset of the UTK dataset for softmax multiclass classification.

val_loss never improves and val_acc remains constant.

Code Sample:

import logging

from keras.engine import Model
from keras.layers import Flatten, Dense
from keras.optimizers import SGD
from keras.utils import np_utils
from keras_vggface.vggface import VGGFace

nb_class = 7
hidden_dim = 512

logging.debug("Loading data...")
image, gender, age, _, image_size, _ = load_data(input_path)
X_data = image
y_data_a = np_utils.to_categorical(age, nb_class)

# Convolutional base with frozen weights; only the new head is trained
vgg_model = VGGFace(include_top=False, input_shape=(224, 224, 3))
for layer in vgg_model.layers:
    layer.trainable = False

# New classifier head on top of the last pooling layer
last_layer = vgg_model.get_layer('pool5').output
x = Flatten(name='flatten')(last_layer)
x = Dense(hidden_dim, activation='relu', name='fc6')(x)
x = Dense(hidden_dim, activation='relu', name='fc7')(x)
out = Dense(nb_class, activation='softmax', name='fc8')(x)
model = Model(vgg_model.input, out)

sgd = SGD(lr=0.00001, momentum=0.9)
model.compile(optimizer=sgd, loss='categorical_crossentropy',
              metrics=['accuracy'])
...
hist = model.fit(X_train, y_train_a, batch_size=batch_size, epochs=nb_epochs,
                 callbacks=callbacks, validation_data=(X_test, y_test_a))
iamgroot42 commented 5 years ago

Your learning rate seems quite low. Did you face the same issue for higher learning rates?
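To see why a learning rate as small as 1e-5 can make the loss look flat, here is a toy gradient-descent sketch (numpy only, not the actual Keras training loop) on a 1-D quadratic loss; with a tiny step size the parameter barely moves over many steps, which shows up as "constant" loss and accuracy:

```python
import numpy as np

def sgd_steps(lr, steps, w0=0.0):
    """Minimize the toy loss L(w) = (w - 3)^2 by plain gradient descent."""
    w = w0
    for _ in range(steps):
        grad = 2 * (w - 3)   # dL/dw
        w -= lr * grad
    return w

slow = sgd_steps(1e-5, 100)  # barely moves from w0 = 0
fast = sgd_steps(1e-1, 100)  # converges close to the optimum w = 3
```

After 100 steps with lr=1e-5 the parameter has moved less than 1% of the way to the optimum, while lr=1e-1 has effectively converged; the same dynamic, scaled up, explains a training curve that looks stuck.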

mingix commented 5 years ago

Normalizing your input images may help solve this problem.
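For reference, keras_vggface ships a preprocess_input utility in keras_vggface.utils that does this normalization. A minimal numpy sketch of the version=1 behaviour (RGB-to-BGR flip plus channel-mean subtraction; the mean values below are assumptions taken from the library's source, so verify against your installed version):

```python
import numpy as np

# Assumed VGGFace (version 1) training-set channel means, in B, G, R order
VGGFACE1_MEANS = np.array([93.5940, 104.7624, 129.1863])

def preprocess(images):
    """images: float array of shape (N, H, W, 3), RGB, values in 0-255."""
    x = images[..., ::-1].astype('float64')  # RGB -> BGR
    return x - VGGFACE1_MEANS                # subtract per-channel means

batch = np.full((1, 224, 224, 3), 128.0)     # dummy mid-gray batch
out = preprocess(batch)
```

Feeding raw 0-255 pixel values into a network trained on mean-subtracted inputs is a common cause of training that never moves.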

rcmalli commented 5 years ago

It is more likely about your training settings. If you have trouble with the preprocessing step, please have a look at the testing code (test.py).