maxpumperla / hyperas

Keras + Hyperopt: A very simple wrapper for convenient hyperparameter optimization
http://maxpumperla.com/hyperas/
MIT License
2.18k stars · 318 forks

hyperparameters and results from all trials #51

Open alexrisman opened 7 years ago

alexrisman commented 7 years ago

Hi, great library! I was just wondering, is there an easy way to save off the hyperparameter choices and best loss/accuracy from each of the trials that are done?

MatthiasWinkelmann commented 7 years ago

I use the following. The important bit of information is that space has all the hyperparameters.

    if 'results' not in globals():
        global results
        results = []

    result = model.fit(...
    valLoss = result.history['val_mean_absolute_error'][-1]
    parameters = space
    parameters["loss"] = valLoss
    parameters["time"] = str(int(time() - start)) + "sec"
    results.append(parameters)
    print(tabulate(data, headers=headers, tablefmt="fancy_grid", floatfmt=".8f"))
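The `if 'results' not in globals()` guard works because hyperas calls the model function once per trial in the same process, so a module-level list survives across trials. A minimal stand-alone sketch of that pattern, with a dummy objective standing in for the real model function and made-up hyperparameters (note the sketch copies `space` instead of mutating it, so trials don't share one dict):

```python
from time import time

def objective(space):
    """Dummy stand-in for the hyperas model function;
    `space` holds the sampled hyperparameters for this trial."""
    # Create the accumulator on the first trial only; later trials reuse it.
    if 'results' not in globals():
        global results
        results = []

    start = time()
    val_loss = space["lr"] * 10        # pretend training outcome
    parameters = dict(space)           # copy so each trial gets its own dict
    parameters["loss"] = val_loss
    parameters["time"] = str(int(time() - start)) + "sec"
    results.append(parameters)
    return val_loss

# Simulate three trials with different sampled hyperparameters.
for lr in (0.5, 0.25, 0.125):
    objective({"lr": lr})

print(len(results))        # 3
print(results[1]["loss"])  # 2.5
```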
alexrisman commented 7 years ago

@MatthiasWinkelmann Thanks for the quick response! So, in the context of the "complete example" in the readme, would

    result = model.fit(...
    valLoss = result.history['val_mean_absolute_error'][-1]
    parameters = space
    parameters["loss"] = valLoss
    parameters["time"] = str(int(time() - start)) + "sec"
    results.append(parameters)

go in the "model" function, and

print(tabulate(data, headers=headers, tablefmt="fancy_grid", floatfmt=".8f"))

go at the end of the script after "print(best_model.evaluate(X_test, Y_test))"?

MatthiasWinkelmann commented 7 years ago

@mthmn20 Almost! It all goes into the model function, around the model.fit(X_train...) call in the example, which needs result = added in front.

You'll also need to add from tabulate import tabulate at the top of the file if you want to use it. results is a simple list of dictionaries, and tabulate formats it nicely for output, but you could do so differently to avoid the dependency.

The print(... line can go either into the model function, if you want to see the results after each run, or at the end of the file after optim.minimize, if you only want to see the results once everything has run.

The line also contained an error in my original post. It should be:

print(tabulate(results, headers="keys", tablefmt="fancy_grid", floatfmt=".8f"))
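As noted above, tabulate is only a convenience: since results is just a list of dictionaries, a few lines of stdlib string formatting can replace it and drop the dependency. A rough sketch with invented trial data (the key names are hypothetical, not from the example):

```python
# Hypothetical trial results in the same shape the thread builds up.
results = [
    {"Dense": 256, "Dropout": 0.25, "loss": 0.68349999, "time": "6sec"},
    {"Dense": 512, "Dropout": 0.50, "loss": 0.61420000, "time": "9sec"},
]

headers = list(results[0].keys())
rows = [[row[h] for h in headers] for row in results]

def fmt(value):
    # Mirror tabulate's floatfmt=".8f" for floats; str() for the rest.
    return f"{value:.8f}" if isinstance(value, float) else str(value)

# Column width = widest formatted cell, header included.
widths = [max(len(fmt(v)) for v in [h] + col)
          for h, col in zip(headers, map(list, zip(*rows)))]

header_line = " | ".join(h.ljust(w) for h, w in zip(headers, widths))
print(header_line)
print("-" * len(header_line))
for row in rows:
    print(" | ".join(fmt(v).ljust(w) for v, w in zip(row, widths)))
```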

alexrisman commented 7 years ago

@MatthiasWinkelmann Thanks, I'll give that a whirl and let you know how it goes!

jdelange commented 7 years ago

@MatthiasWinkelmann This would be a great addition to the examples, but I think I missed something, as I get an error when implementing it in the complete example. I don't know how @mthmn20 fared, but here's my attempt. I added the tabulate library. I'm also not sure how space gets all the hyperparameter info.

Code, modified part in model():

    # added to collect optimization results
    if 'results' not in globals():
        global results
        results = []

    # result added to collect results
    result = model.fit(X_train, Y_train,
                       batch_size={{choice([64, 128])}},
                       nb_epoch=1,
                       verbose=2,
                       validation_data=(X_test, Y_test))

    score, acc = model.evaluate(X_test, Y_test, verbose=0)
    print('Test accuracy:', acc)

    # added to collect results
    valLoss = result.history['val_mean_absolute_error'][-1]
    parameters = space
    parameters["loss"] = valLoss
    parameters["time"] = str(int(time() - start)) + "sec"
    results.append(parameters)

Code appended at __main__:

    # added to output results
    print(tabulate(results, headers="keys", tablefmt="fancy_grid", floatfmt=".8f"))

Error:

    Train on 60000 samples, validate on 10000 samples
    Epoch 1/1
    6s - loss: 1.7345 - acc: 0.3914 - val_loss: 0.6835 - val_acc: 0.8333
    Test accuracy: 0.8333
    Traceback (most recent call last):
      File "D:\Data\Essential\Programming\Python\Keras\HyperasTest\HyperasTest\HyperasTest.py", line 96, in <module>
        trials=Trials())
      File "C:\Program Files\Anaconda2\lib\site-packages\hyperas\optim.py", line 42, in minimize
        notebook_name=notebook_name, verbose=verbose)
      File "C:\Program Files\Anaconda2\lib\site-packages\hyperas\optim.py", line 92, in base_minimizer
        rstate=np.random.RandomState(rseed))
      File "C:\Program Files\Anaconda2\lib\site-packages\hyperopt\fmin.py", line 307, in fmin
        return_argmin=return_argmin,
      File "C:\Program Files\Anaconda2\lib\site-packages\hyperopt\base.py", line 635, in fmin
        return_argmin=return_argmin)
      File "C:\Program Files\Anaconda2\lib\site-packages\hyperopt\fmin.py", line 320, in fmin
        rval.exhaust()
      File "C:\Program Files\Anaconda2\lib\site-packages\hyperopt\fmin.py", line 199, in exhaust
        self.run(self.max_evals - n_done, block_until_done=self.async)
      File "C:\Program Files\Anaconda2\lib\site-packages\hyperopt\fmin.py", line 173, in run
        self.serial_evaluate()
      File "C:\Program Files\Anaconda2\lib\site-packages\hyperopt\fmin.py", line 92, in serial_evaluate
        result = self.domain.evaluate(spec, ctrl)
      File "C:\Program Files\Anaconda2\lib\site-packages\hyperopt\base.py", line 840, in evaluate
        rval = self.fn(pyll_rval)
      File "D:\Data\Essential\Programming\Python\Keras\HyperasTest\HyperasTest\temp_model.py", line 107, in keras_fmin_fnct
    KeyError: 'val_mean_absolute_error'

glindsell commented 3 years ago

Old thread, but for the error above: I tried this and found the following keys available in result.history for tensorflow v2.3.1:

dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy']) 

So if you replace val_mean_absolute_error with val_accuracy, for example, it works.
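The key names in result.history just follow whatever metrics the model was compiled with, and they changed across Keras/TF versions (e.g. acc vs accuracy), which is exactly the KeyError above. A small defensive lookup, sketched here against a plain dict standing in for result.history (the helper name and sample values are made up):

```python
# Stand-in for result.history; real values come from model.fit().
history = {"loss": [1.73], "accuracy": [0.39],
           "val_loss": [0.68], "val_accuracy": [0.83]}

def last_val_metric(history, *candidates):
    """Return the final-epoch value of the first metric name present."""
    for name in candidates:
        if name in history:
            return history[name][-1]
    raise KeyError(f"none of {candidates} in {sorted(history)}")

# Works on both old ('val_acc') and new ('val_accuracy') key spellings.
val_acc = last_val_metric(history, "val_accuracy", "val_acc")
print(val_acc)  # 0.83
```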

My working code is below.

    result = model.fit(train_dataset,
                       epochs=5,
                       verbose=2,
                       validation_data=val_dataset)

    # added to collect optimisation results
    if 'results' not in globals():
        global results
        results = []

    val_acc = result.history['val_accuracy']
    parameters = space
    parameters["val_acc"] = val_acc
    parameters["time"] = str(int(time.time() - start_time)) + "sec"
    score, val_acc_final = model.evaluate(val_dataset, verbose=2)
    parameters["val_acc_final"] = val_acc_final
    results.append(parameters)
    print(tabulate(results, headers="keys", tablefmt="fancy_grid", floatfmt=".8f"))
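Once results holds one dict per trial, picking out the best configuration (the original question) is a one-liner on top of the table. A sketch with invented entries in the shape built above (hyperopt's Trials object also stores per-trial outcomes, but the plain list keeps this self-contained):

```python
# Hypothetical per-trial dicts, same shape as the collected results above.
results = [
    {"batch_size": 64,  "val_acc_final": 0.8310, "time": "41sec"},
    {"batch_size": 128, "val_acc_final": 0.8472, "time": "37sec"},
    {"batch_size": 256, "val_acc_final": 0.8391, "time": "35sec"},
]

# Best single trial, and all trials ranked by validation accuracy.
best = max(results, key=lambda r: r["val_acc_final"])
ranked = sorted(results, key=lambda r: r["val_acc_final"], reverse=True)

print(best["batch_size"])                 # 128
print([r["batch_size"] for r in ranked])  # [128, 256, 64]
```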