Hello,

I still have some bugs/misunderstandings around the interaction between the SciKeras and scikit-learn fit functions. My last issue was written in a hurry and did not highlight the bug, so here I am again.

I have the impression that fitting a model wrapped in KerasClassifier in a loop does not reset its weights before fitting it to the data, the way it would if the Keras model alone were instantiated inside the loop before being fit. In summary, it seems that the model is not correctly reset when fitting it in a loop (a KFold cross-validation loop).

I encountered this issue using SciKeras professionally: there, I clearly had models that gave me an accuracy of 0.6 on the first loop iteration before skyrocketing to something close to 1. I tried to reproduce the behavior with standard dummy data, but performance depends too much on the data quality, so I wanted to highlight the issue by looking directly at the bias weights (even if that is less intuitive to me than a sudden performance increase).
Here are the two pieces of code, using a dummy dataset and printing the weights of the model's bias.

Reinstantiate the model directly in the loop:
```python
from numpy import loadtxt
import numpy as np
from scikeras.wrappers import KerasClassifier
from keras.models import Sequential
from keras.layers import Dense
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold
from sklearn.utils import shuffle
from sklearn.pipeline import make_pipeline

def model_nn():
    model = Sequential()
    model.add(Dense(12, input_shape=(8,), activation='relu'))
    model.add(Dense(8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['binary_crossentropy'])
    return model

# load the dataset
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
X, y = dataset[:, 0:8], dataset[:, 8].astype(int)

n_split = 10
num_runs = 30
all_scores = []
kf = KFold(n_splits=n_split, shuffle=True)
for train_index, eval_index in kf.split(X, y):
    x_test, x_train = X[eval_index, :], X[train_index, :]
    y_test, y_train = y[eval_index], y[train_index]
    score = 0
    for nrun in range(num_runs):
        x_, y_ = shuffle(x_train, y_train)
        # reinstantiate the model so every run starts from fresh weights
        model = model_nn()
        biais = model.layers[1].get_weights()[1]
        print(f"Before: {biais}")
        model.fit(x_, y_, epochs=30, batch_size=16, verbose=False)
        biais = model.layers[1].get_weights()[1]
        print(f"After: {biais}")
        y_pred = (model.predict(x_test) > 0.5).astype(int)
        score += accuracy_score(y_test, y_pred)
    score /= num_runs
    all_scores.append(score)
print(f"Mean accuracy: {np.mean(all_scores)}")
Windows 10
Scikeras 0.10.0
TensorFlow 2.11.0
keras 2.11.0
What do you think about these weights and the learning trend that appears in the second case?
Do you know a way to check whether the model is properly reset when fitting with KerasClassifier?
Thank you in advance for your time