I am trying to optimize an autoencoder such that it also reproduces the calculated committor values simultaneously. The code looks like this:
encoder_input = keras.Input(shape=(ncv,))
xencode = keras.layers.Dense(hidden_layers_encoder[0], activation='linear')(encoder_input)
for i in hidden_layers_encoder[1:]:
    xencode = keras.layers.Dense(i, activation='linear')(xencode)
    xencode = keras.layers.Dropout(0.1)(xencode)
encoder_output = keras.layers.Dense(n_bottleneck, activation='linear')(xencode)
encoder = keras.Model(encoder_input, encoder_output, name="encoder")

decoder_input = keras.layers.Dense(hidden_layers_decoder[0], activation='tanh')(encoder_output)
xdecode = keras.layers.Dropout(0.1)(decoder_input)
for j in range(nhid - 1):
    xdecode = keras.layers.Dense(hidden_layers_decoder[j + 1], activation='tanh')(xdecode)
    xdecode = keras.layers.Dropout(0.1)(xdecode)
decoder_output = keras.layers.Dense(ncv, activation='linear', name='decoder')(xdecode)

opt = keras.optimizers.Adam(learning_rate=0.001)  # 'lr' is deprecated in recent Keras
auto_encoder = keras.Model(encoder_input, decoder_output, name="auto-encoder")

pb_input = keras.layers.Dense(hidden_layers_decoder[0], activation='sigmoid')(encoder_output)
pb_cal = keras.layers.Dropout(0.1)(pb_input)
for k in range(nhid - 1):
    pb_cal = keras.layers.Dense(hidden_layers_decoder[k + 1], activation='sigmoid')(pb_cal)  # loop variable is k, not j
    pb_cal = keras.layers.Dropout(0.1)(pb_cal)
pb_output = keras.layers.Dense(npb, activation='sigmoid', name='pb_decoder')(pb_cal)  # connect to pb_cal, not encoder_output, or the pb branch is bypassed
pbcoder = keras.Model(encoder_input, pb_output, name="pbcoder")

auto_encoder_pb = keras.Model(inputs=encoder_input, outputs=[decoder_output, pb_output], name="auto-encoder-pb")
auto_encoder_pb.compile(optimizer=opt, loss=['mse', 'mse'], metrics=['accuracy'])
history = auto_encoder_pb.fit(x_train, [x_train, y_train], validation_data=(x_test, [x_test, y_test]), batch_size=500, epochs=500)
The input dimension is 14, and I have used four hidden layers in all cases, each with 56 neurons. I have varied the bottleneck dimension from 1 to 8. I have thoroughly checked my data file to make sure there are no NaN/Inf values, but fitting gives me:
Epoch 1/500
143/143 [==============================] - 1s 10ms/step - loss: nan - decoder_loss: nan - pb_decoder_loss: nan - decoder_accuracy: 0.0310 - pb_decoder_accuracy: 0.5448 - val_loss: nan - val_decoder_loss: nan - val_pb_decoder_loss: nan - val_decoder_accuracy: 0.0311 - val_pb_decoder_accuracy: 0.5421
Epoch 2/500
143/143 [==============================] - 1s 7ms/step - loss: nan - decoder_loss: nan - pb_decoder_loss: nan - decoder_accuracy: 0.0307 - pb_decoder_accuracy: 0.5448 - val_loss: nan - val_decoder_loss: nan - val_pb_decoder_loss: nan - val_decoder_accuracy: 0.0311 - val_pb_decoder_accuracy: 0.5421
Epoch 3/500
143/143 [==============================] - 1s 8ms/step - loss: nan - decoder_loss: nan - pb_decoder_loss: nan - decoder_accuracy: 0.0307 - pb_decoder_accuracy: 0.5448 - val_loss: nan - val_decoder_loss: nan - val_pb_decoder_loss: nan - val_decoder_accuracy: 0.0311 - val_pb_decoder_accuracy: 0.5421
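To rule out the data as the source of the NaN losses, I also verified the arrays programmatically right before calling fit() (a quick sanity-check sketch; describe() is a helper I wrote for this, and the synthetic arrays below just stand in for my real x_train / y_train, which have the shapes described above):

```python
import numpy as np

def describe(name, arr):
    """Report whether an array is fully finite, plus its value range."""
    arr = np.asarray(arr, dtype=np.float64)
    finite = bool(np.isfinite(arr).all())  # False if any NaN or Inf
    print(f"{name}: finite={finite}, min={arr.min():.3g}, max={arr.max():.3g}")
    return finite

# Synthetic stand-ins with the shapes from the post (ncv=14, one committor value).
x_train = np.random.rand(1000, 14)
y_train = np.random.rand(1000, 1)

assert describe("x_train", x_train)
assert describe("y_train", y_train)
```

Both checks pass on my real data, so the NaNs appear during training rather than coming from the input file.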
How can I fix this?