optimas-org / optimas

Optimization at scale, powered by libEnsemble
https://optimas.readthedocs.io

Allow attaching past evaluations that are outside of the current parameter range #185

Closed AngelFP closed 2 months ago

AngelFP commented 5 months ago

This enables, among other things, the possibility of resuming an old exploration with an updated range of the varying parameters.

Previously, trying to add trials outside of the current design space when using an AxServiceGenerator would fail because the AxClient would not allow it. In this PR, this workaround is used when fit_out_of_design=True so that trials outside of the current range still contribute to the surrogate model.

When fit_out_of_design=False (the default), out-of-range trials are ignored by the generator and the user is informed about it.
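For illustration, a rough usage sketch (assuming the public optimas API names from the documentation, VaryingParameter, Objective and AxSingleFidelityGenerator, plus the fit_out_of_design flag introduced in this PR; the resume mechanics are simplified):

```python
# Rough sketch: resume an exploration with a widened parameter range while
# letting past, now out-of-range evaluations still inform the surrogate model.
from optimas.core import VaryingParameter, Objective
from optimas.generators import AxSingleFidelityGenerator

# The original run varied x0 in [-5, 5]; the resumed run widens it to [-10, 10].
var = VaryingParameter("x0", -10.0, 10.0)
obj = Objective("f", minimize=True)

gen = AxSingleFidelityGenerator(
    varying_parameters=[var],
    objectives=[obj],
    fit_out_of_design=True,  # keep past out-of-range trials in the model fit
)
# With fit_out_of_design=False (the default), past trials whose parameters fall
# outside [-10, 10] would be ignored by the generator and reported to the user.
```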

delaossa commented 2 months ago

Regarding my first comment, I wonder if it would be a good idea to automatically back up the history file from which the exploration run is resuming. Previous versions had a feature like this, if I recall correctly.

About the other point, I just added a commit where the same strategy for including (or ignoring) trials with parameters outside the design range is replicated in AxModelManager.
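For reference, an illustrative sketch of that strategy (not the actual AxModelManager code; the helper and its `ranges` argument are hypothetical): when fit_out_of_design is False, rows whose parameters fall outside the current design range are dropped before fitting the model.

```python
import pandas as pd

def filter_out_of_design(history: pd.DataFrame, ranges: dict, fit_out_of_design: bool) -> pd.DataFrame:
    """Drop out-of-range rows unless out-of-design points should be fitted.

    `ranges` maps parameter names to (lower, upper) bounds, e.g. {"x0": (-5.0, 5.0)}.
    """
    if fit_out_of_design:
        return history
    in_range = pd.Series(True, index=history.index)
    for name, (lo, hi) in ranges.items():
        in_range &= history[name].between(lo, hi)
    n_dropped = int((~in_range).sum())
    if n_dropped:
        print(f"Ignoring {n_dropped} evaluation(s) outside the design range.")
    return history[in_range]
```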

delaossa commented 2 months ago

A unit test is failing here:

FAILED tests/test_ax_generators.py::test_ax_single_fidelity_resume - AssertionError: assert 'Sobol' == 'GPEI'

This happens when the exploration resumes with fit_out_of_design=True. It is expected that, when trials with parameters outside the design range are added, the next generated point uses the model. However, the generation method is still 'Sobol' in this case.

Could the expected behavior be in conflict with the changes implemented here? -> https://github.com/optimas-org/optimas/pull/207
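For context, a plausible reading of that assertion (a sketch using Ax's GenerationStrategy API; the trial counts are made up): the default strategy only moves from the Sobol initialization step to the GP-based step once the Sobol step's completion criterion is met, so if the evaluations attached on resume do not count toward it, new points keep coming from 'Sobol' instead of 'GPEI'.

```python
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models

# Two-step strategy: Sobol for the first few trials, then GPEI indefinitely.
gs = GenerationStrategy(
    steps=[
        GenerationStep(model=Models.SOBOL, num_trials=4),
        GenerationStep(model=Models.GPEI, num_trials=-1),
    ]
)
# If the trials attached on resume are not counted toward the Sobol step's
# num_trials, the strategy never advances and keeps generating 'Sobol' points.
```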

delaossa commented 2 months ago

I have made additional checks and it seems that this issue is always present when one resumes an optimas run. So it is not particular to this PR, but rather comes from this -> https://github.com/optimas-org/optimas/pull/207 or even from before. However, only the new unit test associated with this PR fails, because it is the only one that checks the generation method of the new trials after resuming.

AngelFP commented 2 months ago

> Regarding my first comment, I wonder if it would be a good idea to automatically back up the history file from which the exploration run is resuming. Previous versions had a feature like this, if I recall correctly.
>
> About the other point, I just added a commit where the same strategy for including (or ignoring) trials with parameters outside the design range is replicated in AxModelManager.

Thanks for adding fit_out_of_design to the AxModelManager. I'm not a fan of how we currently have to duplicate some code in the generator and the AxModelManager, so we should find a way of unifying this in a future PR.

I will have a look at the other issues.

AngelFP commented 2 months ago

> A unit test is failing here:
>
> FAILED tests/test_ax_generators.py::test_ax_single_fidelity_resume - AssertionError: assert 'Sobol' == 'GPEI'
>
> This happens when the exploration resumes with fit_out_of_design=True. It is expected that, when trials with parameters outside the design range are added, the next generated point uses the model. However, the generation method is still 'Sobol' in this case.
>
> Could the expected behavior be in conflict with the changes implemented here? -> #207

Ok, this issue should be fixed now.

AngelFP commented 2 months ago

> Regarding my first comment, I wonder if it would be a good idea to automatically back up the history file from which the exploration run is resuming. Previous versions had a feature like this, if I recall correctly.

I think that saving a backup (or just using a different name for the history file after resuming) could indeed be a good idea. Otherwise it will always be overwritten, which can lead to issues like the one you describe. This should be the focus of a future PR.
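A minimal sketch of what such a backup could look like (hypothetical helper; not part of optimas): copy the existing history file aside before the resumed run starts overwriting it.

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_history(history_path: str) -> Path:
    """Copy the history file to a timestamped backup next to the original."""
    src = Path(history_path)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    dst = src.with_name(f"{src.stem}_backup_{stamp}{src.suffix}")
    shutil.copy2(src, dst)
    return dst
```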