dalalkrish opened this issue 3 years ago
@dalalkrish I know this use case. You can implement the autoencoder as a single subclassed model with `encode` and `decode` methods. That way you don't have to return two models, only the one autoencoder model, and you can call its member functions to do the encoding and decoding.
```python
import tensorflow as tf

class Autoencoder(tf.keras.Model):
    ...  # define the encoder/decoder layers in __init__

    def call(self, inputs):
        # End-to-end forward pass: encode, then decode.
        return self.decode(self.encode(inputs))

    def encode(self, inputs):
        ...  # apply the encoder layers
        return output

    def decode(self, inputs):
        ...  # apply the decoder layers
        return output
```
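For example, a minimal concrete version of that pattern could look like the sketch below; the `DenseAutoencoder` name, the layer sizes, and the dummy data are illustrative assumptions, not code from this thread:

```python
import numpy as np
import tensorflow as tf

class DenseAutoencoder(tf.keras.Model):
    """Minimal concrete example of the single-model autoencoder pattern."""
    def __init__(self, input_dim=64, latent_dim=8):
        super().__init__()
        self.encoder = tf.keras.layers.Dense(latent_dim, activation="relu")
        self.decoder = tf.keras.layers.Dense(input_dim, activation="sigmoid")

    def call(self, inputs):
        return self.decode(self.encode(inputs))

    def encode(self, inputs):
        return self.encoder(inputs)

    def decode(self, latent):
        return self.decoder(latent)

# Train the single model end to end, then use encode/decode separately.
model = DenseAutoencoder()
model.compile(optimizer="adam", loss="mse")
x = np.random.rand(256, 64).astype("float32")  # dummy data for illustration
model.fit(x, x, epochs=1, verbose=0)
latent = model.encode(x)               # encoder output only
reconstruction = model.decode(latent)  # decoder output only
```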
```python
import keras_tuner as kt

class MyTuner(kt.Tuner):
    def run_trial(self, trial, *args, **kwargs):
        # You can add additional HyperParameters for preprocessing and custom
        # training loops by overriding `run_trial`.
        hp = trial.hyperparameters
        model = self.hypermodel.build(hp)  # note: unused here, the base run_trial builds its own model
        batch_size = hp.Int('batch_size', 4, 5, step=1)  # note: sampled but never forwarded to the data pipeline
        return super(MyTuner, self).run_trial(trial, *args, **kwargs)
```
```python
def build_model(hp):
    model = MyModel(flag=False,
                    units_1=hp.Int('units_1', 256, 512, default=128),  # note: default lies outside the [256, 512] range
                    units_2=hp.Int('units_2', 1, 3, default=3),
                    rate_1=hp.Float('rate_1', 0, 1, sampling='linear', default=0.4),
                    rate_2=hp.Float('rate_2', 0, 1, sampling='linear', default=0.3),
                    k=hp.Int('dimension', 60, 100, default=80))
    model.compile(optimizer=tf.keras.optimizers.SGD(),
                  loss="binary_crossentropy",
                  metrics=["acc"])
    return model
```
where MyModel has been implemented by subclassing Keras models (tf.keras.Model); a hypothetical sketch of such a subclass follows below.
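The following sketch is provided purely for context; the interpretation of `flag`, `units_1`, `units_2`, `rate_1`, `rate_2`, and `k` (encoder width, encoder depth, dropout rates, latent dimension) is an assumption and is not confirmed anywhere in this thread:

```python
import tensorflow as tf

class MyModel(tf.keras.Model):
    """Hypothetical reconstruction; the real constructor arguments may mean something else."""
    def __init__(self, flag, units_1, units_2, rate_1, rate_2, k):
        super().__init__()
        self.flag = flag  # assumed to toggle some optional branch of the model
        self.encoder = tf.keras.Sequential(
            [tf.keras.layers.Dense(units_1, activation="relu")
             for _ in range(units_2)]                          # units_2 read as encoder depth
            + [tf.keras.layers.Dropout(rate_1),
               tf.keras.layers.Dense(k, activation="relu")])   # k read as latent dimension
        self.head = tf.keras.Sequential(
            [tf.keras.layers.Dropout(rate_2),
             tf.keras.layers.Dense(1, activation="sigmoid")])  # binary output, matching binary_crossentropy

    def call(self, inputs, training=False):
        z = self.encoder(inputs, training=training)
        return self.head(z, training=training)
```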
```python
from sklearn.model_selection import train_test_split

accuracy = []
precision = []
recall = []
auroc = []

# skf is a StratifiedKFold instance; BertGenerator and class_weight_function
# are defined elsewhere in the original code.
for fold, (train_index, test_index) in enumerate(skf.split(X, y)):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    X_train, X_eval, y_train, y_eval = train_test_split(
        X_train, y_train, test_size=0.2, random_state=42)
    print("Currently on fold: {}".format(fold))

    train_data = BertGenerator(X_train, y_train, batch_size=6)
    eval_data = BertGenerator(X_eval, y_eval, batch_size=6)

    tuner = MyTuner(
        oracle=kt.oracles.BayesianOptimization(
            objective=kt.Objective('val_loss', 'min'), max_trials=20),
        hypermodel=build_model,
        directory='./',
        project_name='autoencoders')
    tuner.search_space_summary()  # prints directly and returns None, so no print() wrapper is needed
    tuner.search(train_data,
                 epochs=1000,
                 validation_data=eval_data,
                 class_weight=class_weight_function(y_train),
                 callbacks=[tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=4)])
    tuner.results_summary()
```
This code runs for one epoch and then the following error is raised: `AttributeError: 'NoneType' object has no attribute 'replace'`. I am not able to understand the reason for this error. Thank you in advance.
I'm getting the following error and I'm not able to figure out why. I have read the answers here and here, which seem to say to import `keras` from `tensorflow` instead of the standalone `keras` package. I am doing that, but I still get the error. I would very much appreciate your help in figuring this out. Below is my entire code:
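For reference, a short sketch of the import style those answers recommend, i.e. using the Keras that ships with TensorFlow rather than the standalone `keras` package (the layer import is just an example):

```python
# Use the Keras bundled with TensorFlow everywhere in the project.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

print(tf.__version__, keras.__version__)

# Avoid mixing this with the standalone package in the same code base:
# import keras   # standalone Keras; mixing the two often causes subtle type/attribute errors
```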