Dear devs,
I want to use tune-sklearn with my predefined dataset in a Keras CNN model. I am using a pattern similar to GridSearchCV, but instead of relying on the training loss and accuracy, I want to monitor the loss or accuracy on the validation set. My code looks like this:
And I define my predefined validation split like this:
import numpy as np
import pandas as pd
from tensorflow.keras.utils import to_categorical

def folding_maker(train, valid):
    # Stack the training and validation frames into one dataset.
    t = pd.concat([train, valid])
    tY = to_categorical(t.pop('CLASS').to_numpy())
    t = t.drop(columns=['RECORD_NAME', 'Minute']).to_numpy()
    t = np.expand_dims(t, axis=-1)
    # PredefinedSplit expects one entry per sample: -1 keeps a sample
    # in training, 0 assigns it to the validation fold. Building this
    # from the lengths avoids relying on (possibly duplicated) index
    # values after the concat.
    folded = [-1] * len(train) + [0] * len(valid)
    return t, tY, folded
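For context, the fold array returned above is meant to feed scikit-learn's PredefinedSplit, which tune-sklearn accepts as a cv argument. A standalone sketch with toy data (the column names and sizes here are placeholders, not my real dataset):

```python
import pandas as pd
from sklearn.model_selection import PredefinedSplit

# Toy stand-ins for my real train/valid DataFrames.
train = pd.DataFrame({'CLASS': [0, 1] * 4, 'feat': range(8)})
valid = pd.DataFrame({'CLASS': [0, 1] * 2, 'feat': range(4)})

# One entry per combined sample: -1 keeps a sample in every training
# split, 0 assigns it to the single predefined validation fold.
folded = [-1] * len(train) + [0] * len(valid)

ps = PredefinedSplit(test_fold=folded)
print(ps.get_n_splits())  # 1
for train_idx, test_idx in ps.split():
    print(len(train_idx), len(test_idx))  # 8 4
```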
However, when I change the monitor to val_loss it gives me this error:
WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
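As far as I can tell, Keras only reports val_loss/val_accuracy when fit() actually receives validation data, so I suspect my validation fold is not being passed through to fit(). A standalone check with a toy model (not my CNN, just to illustrate the behaviour I would expect):

```python
import numpy as np
from tensorflow import keras

X = np.random.rand(32, 4).astype('float32')
y = np.random.randint(0, 2, size=(32,))

model = keras.Sequential([
    keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

# Without validation data, only `loss`/`accuracy` exist, so
# monitor='val_loss' triggers the warning above.
h1 = model.fit(X, y, epochs=1, verbose=0)
print('val_loss' in h1.history)  # False

# With validation_split (or validation_data), `val_loss` appears
# and monitor='val_loss' becomes valid.
h2 = model.fit(X, y, epochs=1, validation_split=0.25, verbose=0)
print('val_loss' in h2.history)  # True
```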
And when I set the monitor to loss, it works, but the best value is always reset to inf, followed by a tf.function retracing warning:
Epoch 00001: loss improved from inf to 0.56859, saving model to XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
WARNING:tensorflow:6 out of the last 11 calls to <function Model.make_test_function.<locals>.test_function at 0x7f8fa00df6a8> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
2/2 [==============================] - 0s 69ms/step - loss: 0.0000e+00 - accuracy: 1.0000
The `start_trial` operation took 2.656 s, which may be a performance bottleneck.
5/5 [==============================] - 2s 239ms/step - loss: 0.9378 - accuracy: 0.7908
Epoch 00001: loss improved from inf to 0.47824, saving model to XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
WARNING:tensorflow:6 out of the last 11 calls to <function Model.make_test_function.<locals>.test_function at 0x7f9176998bf8> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
2/2 [==============================] - 0s 58ms/step - loss: 0.0000e+00 - accuracy: 1.0000
The `start_trial` operation took 2.662 s, which may be a performance bottleneck.
My validation data consists of 3 records containing 30 samples for each class (A and B), so with two classes there should be 60 validation samples.