maxpumperla / hyperas

Keras + Hyperopt: A very simple wrapper for convenient hyperparameter optimization
http://maxpumperla.com/hyperas/
MIT License

Regarding best_run #201

Open BenLim88 opened 5 years ago

BenLim88 commented 5 years ago

I am running hyperas for a regression task with 5 outputs. I managed to set up the hyperas syntax and am able to run it.

During the iterations, I noticed my training loss was as low as 0.8 and val_loss as low as 0.7. The main part of the code is:

```python
history = model.fit(X_train, Y_train,
                    batch_size={{choice([64, 128])}},
                    epochs={{choice([200])}},
                    verbose=2,
                    validation_data=(X_val, Y_val),
                    callbacks=callbacks_list)

score, acc = model.evaluate(X_val, Y_val, verbose=1)
print('Final validation accuracy:', acc)
return {'loss': -acc, 'status': STATUS_OK, 'model': model,
        'history.val_loss': history.history['val_loss'],
        'history.loss': history.history['loss']}
```

```python
if __name__ == '__main__':
    trials = Trials()
    best_run, best_model, space = optim.minimize(model=create_model,
                                                 data=data,
                                                 algo=tpe.suggest,
                                                 max_evals=30,
                                                 trials=trials,
                                                 eval_space=True,
                                                 return_space=True)
```
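For context on how `best_run` is chosen: hyperopt minimizes whatever value is stored under the `'loss'` key of each trial's result dict, so returning `-acc` means the search selects the trial with the largest `acc`. A toy sketch of that selection rule (plain Python stand-in, not the real hyperopt; the trial numbers are made up):

```python
# Toy sketch of how hyperopt picks best_run: it minimizes the value
# stored under the 'loss' key of each trial's result dict.
# (Plain Python stand-in, not the real hyperopt; numbers are made up.)
trials = [
    {'params': {'batch_size': 64},  'loss': -0.70},  # loss = -acc
    {'params': {'batch_size': 128}, 'loss': -0.55},
]

best = min(trials, key=lambda t: t['loss'])
print(best['params'])  # the trial whose 'loss' (here -acc) is smallest
```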

And then, in order to check the training error of the best_model, I did this:

```python
train_loss, train_score = best_model.evaluate(X_train, Y_train, verbose=0)
print(train_loss, train_score)
val_loss, val_score = model.evaluate(x_val, y_val, verbose=0)
print(val_loss, val_score)
test_loss, test_score = model.evaluate(x_test, y_test, verbose=0)
print(test_loss, test_score)
```
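One detail worth keeping in mind when unpacking these calls: Keras's `Model.evaluate` returns the loss first, followed by the metrics in `compile` order, so the first unpacked value is always the loss. A minimal stand-in (stub values, not a real model) showing the unpacking:

```python
# Stand-in for keras Model.evaluate, which returns [loss, *metrics]
# in the order given to compile(loss=..., metrics=[...]).
# The values here are made up for illustration.
def evaluate_stub():
    return [570.14, 0.8]  # [loss, first metric]

train_loss, train_score = evaluate_stub()
print(train_loss)   # the loss, always first
print(train_score)  # the first compiled metric
```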

However, it gave me really high values: a training loss of 570.14, a val_loss of 570.07, and a test_loss of 573.39.

Any idea what I am doing wrong here? That is, why are the losses of best_model so large?