numbbo / coco

Numerical Black-Box Optimization Benchmarking Framework
https://numbbo.github.io/coco

Experimentation with dynamic instance repetitions #1978

Open nikohansen opened 4 years ago

nikohansen commented 4 years ago

We may want to encourage experimenters to repeat trials on instances depending on the observed success, see also #1117. A simple variation of the current setup would be to conduct new runs on new problem instantiations of the same function instance rather than restarts on the very same problem instantiation (as currently).

Depending on the language used for the experiments, this may not be straightforward to implement. We should consider implementing a "stay-with-current-instance" signal for the iterator at the C source implementation level.
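For illustration, a minimal sketch (not part of the COCO API; `run_trial`, `repeat_instance`, and the success probability are made up) of what success-dependent instance repetition could look like on the experimenter's side:

```python
import random

def run_trial(seed):
    """Stand-in for one optimization run on a fresh problem
    instantiation; returns True on a (simulated) success."""
    rng = random.Random(seed)
    return rng.random() < 0.3  # hypothetical 30% success rate

def repeat_instance(instance_id, min_successes=2, max_repetitions=10):
    """Repeat trials on new instantiations of the same function
    instance until enough successes are observed or the repetition
    budget is exhausted."""
    successes = repetitions = 0
    while successes < min_successes and repetitions < max_repetitions:
        # a new instantiation: same (function, dimension, instance),
        # different randomization, here emulated by a different seed
        successes += run_trial(instance_id * 1000 + repetitions)
        repetitions += 1
    return successes, repetitions

successes, repetitions = repeat_instance(instance_id=1)
```

The point of the sketch is only the control flow: repetitions are spent on *new* instantiations, not on restarts of the very same one.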

ttusar commented 4 years ago

In C, something like this should do the job:

```c
coco_problem_t *coco_problem_reset(coco_problem_t *problem, coco_observer_t *observer) {
  coco_problem_t *inner_problem, *reset_problem;
  inner_problem = coco_problem_remove_observer(problem, observer);
  reset_problem = coco_problem_add_observer(inner_problem, observer);
  return reset_problem;
}
```
nikohansen commented 4 years ago

The natural candidates in Python are

```python
problem.remove_observer().observe_with(observer)
problem.remove_observer().add_observe(observer)
problem.observer_reset(observer)
problem.observer_reset()
```

where the last seems to me the best. The first two seem overly verbose and the third seems "wrong" because there is actually no choice in which observer to reset.

It looks like the Cython interface can (and should?) be updated based on the currently available C interface.

On a side note, the "correct" C name seems to be:

```c
coco_problem_t *coco_problem_reset_observer(coco_problem_t *problem, coco_observer_t *observer)
```
nikohansen commented 4 years ago

A small question: does the value of coco_problem_final_target_hit(problem) change when the observer is reset?

ttusar commented 4 years ago

Yes, I imagine it does.

ttusar commented 4 years ago

> Yes, I imagine it does.

Actually, I take it back. The result of this function depends on best_observed_fvalue, which is updated within coco_evaluate_function, so I think the value of coco_problem_final_target_hit(problem) does not change.
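For intuition, a hypothetical Python mock (not the actual C or Cython code) of why the flag survives a reset: final_target_hit derives from best_observed_fvalue, which lives with the problem and is updated on evaluation, while the reset only swaps the logger:

```python
class MockProblem:
    """Made-up stand-in for a COCO problem, for illustration only."""

    def __init__(self, final_target=1e-8):
        self.final_target = final_target
        self.best_observed_fvalue = float('inf')
        self.evaluations = 0

    def evaluate(self, fvalue):
        # evaluation updates the problem-owned best-so-far value
        self.evaluations += 1
        self.best_observed_fvalue = min(self.best_observed_fvalue, fvalue)

    @property
    def final_target_hit(self):
        # derived from problem state, not from logger state
        return self.best_observed_fvalue <= self.final_target

    def observer_reset(self):
        pass  # replaces the logger only; problem state is untouched

p = MockProblem()
p.evaluate(1e-9)           # hits the final target
assert p.final_target_hit
p.observer_reset()
assert p.final_target_hit  # unchanged: the flag lives with the problem
```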

nikohansen commented 4 years ago

Then, I would assume the same is true for coco_problem_get_evaluations(problem).

Indeed it is, yet this

```c
  logger->number_of_evaluations_constraints = coco_problem_get_evaluations_constraints(problem);
  logger->number_of_evaluations++; /* could be != coco_problem_get_evaluations(problem) for non-anytime logging? */
```

looks somewhat inconsistent for constraint evaluations then.

ttusar commented 4 years ago

Counting of constraint evaluations is complicated...

nikohansen commented 4 years ago

OK, but this means that the current idea does not work with constraints unless we change either the logger code (a different counting approach, which is complicated) or the observer code (resetting the constraints counter of the problem, which is intrusive). However, not working with constraints looks like a deal breaker to me.

nikohansen commented 4 years ago

A possible fix is replacing:

```c
logger->number_of_evaluations_constraints =
    coco_problem_get_evaluations_constraints(problem);
```

with

```c
long evalscon = (long)coco_problem_get_evaluations_constraints(problem)
    - (long)logger->last_number_of_evaluations_constraints;
assert(evalscon >= 0);
logger->number_of_evaluations_constraints += evalscon;
logger->last_number_of_evaluations_constraints += evalscon;
```

in which case the new logger must "inherit" last_number_of_evaluations_constraints from the previous logger during the reset.

Another possible fix is that problems provide an evaluations_constraints_unlogged counter that is read out and reset by the logger.
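A minimal Python mock of the delta-counting idea (the attribute names follow the C snippet above, but the class itself is made up for illustration): the logger accumulates only the constraint evaluations that occurred since it was attached, and a logger created on reset inherits the running total from its predecessor:

```python
class MockLogger:
    """Made-up logger illustrating delta counting of constraint evaluations."""

    def __init__(self, last_count=0):
        # a new logger "inherits" last_... from its predecessor on reset
        self.last_number_of_evaluations_constraints = last_count
        self.number_of_evaluations_constraints = 0

    def update(self, problem_evals_constraints):
        # count only evaluations that happened since this logger was attached
        delta = (problem_evals_constraints
                 - self.last_number_of_evaluations_constraints)
        assert delta >= 0
        self.number_of_evaluations_constraints += delta
        self.last_number_of_evaluations_constraints += delta

logger = MockLogger()
logger.update(5)   # 5 constraint evaluations observed so far
# observer reset: the new logger inherits the running total
logger = MockLogger(logger.last_number_of_evaluations_constraints)
logger.update(8)   # only the 3 new constraint evaluations are counted
```

After the reset, the fresh logger reports 3 constraint evaluations instead of re-counting all 8.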

nikohansen commented 4 years ago

Change of calling sequence in Python:

```python
evalsleft = lambda: int(problem.dimension * budget_multiplier + 1
                        - max((problem.evaluations, problem.evaluations_constraints)))

# PREVIOUS:
problem.observe_with(observer)  # generate the data for cocopp post-processing
problem(np.zeros(problem.dimension))  # making algorithms more comparable
propose_x0 = problem.initial_solution_proposal  # callable, all zeros in first call
time1 = time.time()
# apply restarts
while evalsleft() > 0 and not problem.final_target_hit:
    fmin(problem, ...)

# NEW PROPOSAL:
problem.observe_with(observer)  # generate the data for cocopp post-processing
time1 = time.time()
# apply restarts
while evalsleft() > 0 and not problem.final_target_hit:
    # we could think of wanting more than one hit
    problem(np.zeros(problem.dimension))  # making algorithms more comparable
    # these are not independent restarts, which could make the performance
    # evaluation worse if we ask for more than one final target hit
    propose_x0 = problem.initial_solution_proposal  # callable, all zeros in first call
    fmin(problem, ...)
    problem.observer_reset()  # generate a new trial for cocopp post-processing

# ANOTHER NEW PROPOSAL:
time1 = time.time()
# apply restarts
while evalsleft() > 0 and not problem.final_target_hit:
    # we could think of wanting more than one hit
    problem.observe_with(observer)  # add or reset observer to generate the data for cocopp
    problem(np.zeros(problem.dimension))  # making algorithms more comparable
    # these are not independent restarts, which could make the performance
    # evaluation worse if we ask for more than one final target hit
    propose_x0 = problem.initial_solution_proposal  # callable, all zeros in first call
    fmin(problem, ...)
```

See also #1982, motivating the last version.

It may make sense to "abuse" a repeated call of problem.observe_with(observer) as a signal to reset the observer. The expected current behavior, wrapping another logger by the same observer (is it?), seems not to make sense anyway.
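A hypothetical sketch of that signal (MockObserver and MockProblem are made up; this is not the cocoex API): a repeated observe_with call with the same observer triggers a reset, i.e., opens a new trial, instead of wrapping a second logger around the first:

```python
class MockObserver:
    """Made-up observer that only counts the trials it has opened."""
    def __init__(self):
        self.trials = 0

class MockProblem:
    """Made-up problem illustrating observe_with as a reset signal."""
    def __init__(self):
        self._observer = None

    def observe_with(self, observer):
        if observer is None:
            return self
        if self._observer is observer:
            # repeated call with the same observer: interpret as a reset,
            # i.e., open a new trial rather than wrapping a second logger
            observer.trials += 1
        else:
            # first call: attach the observer and open the first trial
            self._observer = observer
            observer.trials = 1
        return self

obs = MockObserver()
p = MockProblem().observe_with(obs)  # attaches, opens trial 1
p.observe_with(obs)                  # repeated call acts as a reset
```

With this convention, no separate observer_reset method is needed on the Python side.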