In my dataset, the number of training images with class "0" is 3828, the number with class "1" is 3740, and there are 379 validation photos. The model I use is:
from keras.models import Model
from keras.layers import (Input, Dense, Dropout, Concatenate, Subtract,
                          Multiply, GlobalMaxPool2D, GlobalAvgPool2D)
from keras.optimizers import Adam
from keras_vggface.vggface import VGGFace

def baseline_model():
    input_1 = Input(shape=(224, 224, 3))
    input_2 = Input(shape=(224, 224, 3))

    base_model = VGGFace(model='resnet50', include_top=False)
    for x in base_model.layers[:-3]:
        x.trainable = True

    # Shared backbone applied to both inputs
    x1 = base_model(input_1)
    x2 = base_model(input_2)

    # Pool each branch with both global max and global average pooling
    x1 = Concatenate(axis=-1)([GlobalMaxPool2D()(x1), GlobalAvgPool2D()(x1)])
    x2 = Concatenate(axis=-1)([GlobalMaxPool2D()(x2), GlobalAvgPool2D()(x2)])

    # Combine the branches: squared difference and difference of squares
    x3 = Subtract()([x1, x2])
    x3 = Multiply()([x3, x3])
    x1_ = Multiply()([x1, x1])
    x2_ = Multiply()([x2, x2])
    x4 = Subtract()([x1_, x2_])
    x = Concatenate(axis=-1)([x4, x3])

    x = Dense(100, activation="relu")(x)
    x = Dropout(0.01)(x)
    out = Dense(1, activation="sigmoid")(x)  # softmax
    model = Model([input_1, input_2], out)
    model.compile(loss="binary_crossentropy", optimizer=Adam(0.00001),
                  metrics=['accuracy'])  # metrics=[f1_m, precision_m, recall_m]
    model.summary()
    return model
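For context, I train along the following lines (a simplified sketch, not my exact script; the generator arguments, batch size, epochs, and step counts are placeholders):

model = baseline_model()
model.fit_generator(
    Data_generator(train_pairs, train_y, batch_size=16),
    steps_per_epoch=len(train_pairs) // 16,
    epochs=50,
    validation_data=Data_generator(val_pairs, val_y, batch_size=16),
    validation_steps=len(val_pairs) // 16,  # batches drawn from the val generator per epoch
)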
The result is:
loss: 3.0981 - accuracy: 0.9739 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Why are val_loss and val_accuracy stuck at zero?
In my Data_generator function, I convert each batch of images to a numpy array:
x_batch = np.array(x_batch)
x_batch1 = np.array(x_batch1)
y_batch = np.array(y[idd])
yield [x_batch, x_batch1], y_batch
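For completeness, the generator has roughly the following shape (a simplified sketch of its overall structure, not the exact code; load_image stands in for my real image loading and preprocessing):

import numpy as np

def load_image(path):
    # Placeholder: the real code reads and preprocesses a 224x224 RGB image.
    return np.zeros((224, 224, 3), dtype=np.float32)

def Data_generator(pairs, y, batch_size):
    # pairs: list of (path_a, path_b) image-path pairs; y: matching labels
    while True:  # Keras expects the generator to loop forever
        for idd in range(0, len(pairs), batch_size):
            x_batch = [load_image(a) for a, b in pairs[idd:idd + batch_size]]
            x_batch1 = [load_image(b) for a, b in pairs[idd:idd + batch_size]]
            x_batch = np.array(x_batch)    # shape: (batch, 224, 224, 3)
            x_batch1 = np.array(x_batch1)
            y_batch = np.array(y[idd:idd + batch_size])
            yield [x_batch, x_batch1], y_batch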
How can I solve this problem?