This PR adds three new methods to `Exploration` that enable a more flexible and interactive use of optimas, similar to what other libraries like Ax and Xopt already offer:

- `attach_trials`: allows the user to manually suggest a set of trials to evaluate, independently of the generator.
- `evaluate_trials`: allows the user to manually suggest and immediately evaluate a set of trials, independently of the generator.
- `attach_evaluations`: attaches past evaluations from an external source.
All of these methods accept several types of input (dictionary, list, pandas DataFrame or NumPy array), which are internally converted to a pandas DataFrame to simplify the implementation (see the sketch below).
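For illustration, a minimal sketch of what these inputs could look like; the `exploration` object is assumed to be configured elsewhere, and the parameter names (`x0`, `x1`) as well as the exact shape of each input type are assumptions, not part of this PR:

```python
import pandas as pd

# `exploration` is assumed to be an already-configured Exploration object
# with varying parameters named `x0` and `x1` (illustrative names).

# Suggest trials as a list of dictionaries ...
exploration.attach_trials([{"x0": 0.1, "x1": 2.0}, {"x0": 0.4, "x1": 3.0}])

# ... or as a dictionary of lists ...
exploration.attach_trials({"x0": [0.1, 0.4], "x1": [2.0, 3.0]})

# ... or as a pandas DataFrame (a NumPy structured array works similarly).
exploration.attach_trials(pd.DataFrame({"x0": [0.1, 0.4], "x1": [2.0, 3.0]}))
```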
In addition to the above, `Exploration.history` now returns a pandas DataFrame with the columns sorted for convenience.
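For example (assuming `history` is accessed as a property on an existing `exploration` object):

```python
df = exploration.history  # pandas DataFrame with conveniently sorted columns
print(df.head())
```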
Notes on implementation:
When using `attach_trials`, the suggested trials are added to the top of the trial queue of the generator (that is, they will be the first ones to be suggested the next time `Generator.ask` is called). This queue is also a new feature introduced in this PR. When `ask` is called, the generator first tries to get trials from the queue; if the queue does not have enough trials, it generates new ones and adds them to the queue, as sketched below.
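A rough sketch of this behavior, under the assumption that the generator is reachable through the exploration object and that `ask` takes the number of requested trials:

```python
# One trial is attached manually; it goes to the top of the generator's queue.
exploration.attach_trials([{"x0": 0.1, "x1": 2.0}])

# The next ask() serves the attached trial first. Since the queue does not
# hold enough trials for this request, the generator creates the remaining
# two itself, adds them to the queue, and returns all three.
trials = exploration.generator.ask(3)
```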
`evaluate_trials` is simply a convenience method that calls `Exploration.attach_trials` followed immediately by `Exploration.run(n_evals)` (see the sketch below).
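In other words, using the assumed input format from above, the following two ways of evaluating a set of manually suggested trials should be equivalent:

```python
trials = [{"x0": 0.1, "x1": 2.0}, {"x0": 0.4, "x1": 3.0}]

# Attach the trials and run the matching number of evaluations ...
exploration.attach_trials(trials)
exploration.run(n_evals=len(trials))

# ... or do both in a single call.
exploration.evaluate_trials(trials)
```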
When calling `attach_evaluations`, the user only needs to provide the basic evaluation data (input parameters, values of the objectives, and other analyzed parameters). There is no need to pass the other fields of the libEnsemble history (such as `sim_worker`, `sim_started`, etc.). However, optimas needs to be able to generate an array that includes these fields. To do so, the `Exploration` now stores a libEnsemble `History` object, which takes care of initializing an array with all the fields required by libEnsemble. A short sketch is shown below.
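A sketch of attaching external evaluations; the column names (`x0`, `x1`, `f`) are illustrative and only `attach_evaluations` itself comes from this PR:

```python
import pandas as pd

# Past evaluations from an external source: only the inputs, the objective
# values and any analyzed parameters need to be provided.
past_evals = pd.DataFrame(
    {
        "x0": [0.1, 0.4, 0.7],
        "x1": [2.0, 3.0, 1.5],
        "f": [1.2, 0.8, 0.95],  # objective values
    }
)
exploration.attach_evaluations(past_evals)

# Fields such as `sim_worker` or `sim_started` are filled in internally via
# the libEnsemble History object stored by the Exploration.
```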