numbbo / coco

Numerical Black-Box Optimization Benchmarking Framework
https://numbbo.github.io/coco

coco queries #1782

Closed mudita11 closed 6 years ago

mudita11 commented 6 years ago

I would like to ask a few questions about the following definition of coco_optimize, to understand how the different solver options work:

def coco_optimize(solver, fun, max_evals, max_runs=1):
    range_ = fun.upper_bounds - fun.lower_bounds
    center = fun.lower_bounds + range_ / 2
    if fun.evaluations:
        print('WARNING: %d evaluations were done before the first solver call'
              % fun.evaluations)

    for restarts in range(int(max_runs)):
        remaining_evals = max_evals - fun.evaluations
        x0 = center + (restarts > 0) * 0.8 * range_ * (
                np.random.rand(fun.dimension) - 0.5)
        fun(x0)  # can be commented out, if this is done by the solver
        if solver.__name__ in ("random_search", ):
            solver(fun, fun.lower_bounds, fun.upper_bounds,
                   remaining_evals)
        elif solver.__name__ == 'fmin' and solver.__globals__['__name__'] in [
                'cma', 'cma.evolution_strategy', 'cma.es']:
            if x0[0] == center[0]:
                sigma0 = 0.02
                restarts_ = 0
            else:
                x0 = "%f + %f * np.random.rand(%d)" % (
                        center[0], 0.8 * range_[0], fun.dimension)
                sigma0 = 0.2
                restarts_ = 6 * (observer_options.as_string.find('IPOP') >= 0)

            solver(fun, x0, sigma0 * range_[0], restarts=restarts_,
                   options=dict(scaling=range_ / range_[0],
                                maxfevals=remaining_evals,
                                termination_callback=lambda es: fun.final_target_hit,
                                verb_log=0, verb_disp=0, verbose=-9))
        elif solver.__name__ == 'fmin_slsqp':
            solver(fun, x0, iter=1 + int(remaining_evals / fun.dimension),
                   iprint=-1)
        elif True:
            solver(fun, fun.lower_bounds, fun.upper_bounds, remaining_evals)
        else:
            raise ValueError("no entry for solver %s" % str(solver.__name__))

        if fun.evaluations >= max_evals or fun.final_target_hit:
            break
        if fun.evaluations <= max_evals - remaining_evals:
            if max_evals - fun.evaluations > fun.dimension + 1:
                print("WARNING: %d evaluations remaining" % remaining_evals)
            if fun.evaluations < max_evals - remaining_evals:
                raise RuntimeError("function evaluations decreased")
            break
    return restarts + 1

Question 1: Why does CMA-ES start from the special point x0? Am I right to say that, because population-based algorithms do not have a single starting point, the comparison between a population-based algorithm and CMA-ES would not be fair (if CMA-ES starts from x0)? And why is x0 defined this way?

Question 2: To set the solver to CMA-ES, can you point me to the CMA-ES code in the coco package?

Question 3: I would like some insight into how many times an algorithm is run to compute the runtime for different targets t (say t = 51 targets). When an algorithm is run on a problem (f, n, j) (say one function f with dimension n and instance number j), is it run 51 times (one run per target), or is it run only once, recording the runtimes as each target is reached? Can you refer me to a paper or give an explanation for this, please?

I would really appreciate it if you could answer the above questions in any way, either by referring to a paper or by giving some insight yourself in a few lines. Thank you.

brockho commented 6 years ago

Dear Mudita,

Let me start with some answers and the other COCO developers can add to it if necessary.

ad 1) CMA-ES simply requires an initial search point as one of its input parameters, as many other single-objective algorithms do (for example the famous BFGS, but also SLSQP, COBYLA, fmincon in Matlab, ...). Comparisons are fair in the sense that population-based algorithms are free to include x_0 in their own initial population. Note that on some functions, the initial search point suggested by COCO is better than the expected function value of a random point; in this respect, an algorithm that (on purpose or not) evaluates the search space origin (the suggested x_0 of COCO) will typically appear better than an algorithm that does not. This is the main reason why we include the line

fun(x0)  # can be commented out, if this is done by the solver

into the example experiment to give all algorithms this "advantage".
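For illustration, here is a minimal sketch of how any solver, population-based or not, can make use of the suggested initial point: the function `random_search_with_x0` and its signature are hypothetical, not code from the COCO package.

```python
import random

def random_search_with_x0(fun, lower, upper, budget, x0=None):
    """Toy random search that first evaluates a suggested initial point x0,
    if given, before sampling uniformly in the box [lower, upper]."""
    best_x, best_f = None, float('inf')
    points = [x0] if x0 is not None else []
    while len(points) < budget:
        points.append([random.uniform(l, u) for l, u in zip(lower, upper)])
    for x in points:
        fx = fun(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

Because x0 is evaluated like any other candidate, a solver is never worse off for receiving it.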

ad 2) The CMA-ES algorithm itself is not part of the coco package. Try pip install cma to install it from PyPI. See also https://github.com/CMA-ES/pycma for more details.
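Once pycma is installed, `cma.fmin` is picked up by the example experiment's dispatch on the function's `__name__` and on the name of the module it was defined in (`solver.__globals__['__name__']`). The mechanism can be seen with any pure-Python function; `json.loads` is used below only as a stand-in, since `cma` may not be installed:

```python
import json

# The example experiment tests solver.__name__ == 'fmin' and
# solver.__globals__['__name__'] in ('cma', ...). Every pure-Python
# function carries these attributes:
solver = json.loads
print(solver.__name__)                 # 'loads'
print(solver.__globals__['__name__'])  # 'json'
```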

ad 3) For each combination of function, dimension, and instance, we typically ask that only one run is performed. The data points for all targets are then recorded from the same run. Note that we actually do not record only the 51 displayed targets (this is a setting, given by the postprocessing), but we record a certain number of targets per order of magnitude (in function value).
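The single-run bookkeeping can be sketched as follows; this is a simplified illustration with made-up names (`first_hit_times`), not COCO's actual logger, which records considerably more:

```python
import math

def first_hit_times(f_values, f_opt, targets):
    """For one run, return the first evaluation count at which each target
    precision best_f - f_opt <= target was reached (None if never).

    f_values: the objective value observed at each evaluation, in order.
    """
    hits = {t: None for t in targets}
    best = math.inf
    for evals, fv in enumerate(f_values, start=1):
        best = min(best, fv)
        for t in targets:
            if hits[t] is None and best - f_opt <= t:
                hits[t] = evals
    return hits

# One run of 6 evaluations on a problem with optimal value 0:
run = [3.0, 1.5, 0.9, 0.2, 0.05, 0.008]
print(first_hit_times(run, 0.0, [1.0, 0.1, 0.01]))
# -> {1.0: 3, 0.1: 5, 0.01: 6}
```

All three runtimes come from the same single run, which is why only one run per (function, dimension, instance) is needed.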

I hope this already answers most of your questions. For more information, please refer to the documentation of COCO which can be found at the end of the COCO github page (https://github.com/numbbo/coco) and most promising for your questions, in the general COCO introduction (http://numbbo.github.io/coco-doc/) and in the performance assessment documentation (http://numbbo.github.io/coco-doc/perf-assessment/).

If things continue to be unclear, please let us know. We are always happy to receive feedback about what is missing in the documentation or where it can be made clearer.

mudita11 commented 6 years ago

Thank you for the detailed answer; it was clear and indeed helpful. I would now like to know how an algorithm can, during the run, report the target reached together with the runtime. Which variables and methods represent these, and help retrieve this data?

brockho commented 6 years ago

The targets are not supposed to be visible to the algorithm because this would immediately mean that the algorithm has information about the optimal function value.

But with COCO, the algorithm also does not have to have this information: each time the algorithm evaluates the objective function, COCO records everything important for you and writes the information into the corresponding .dat and .tdat files which the postprocessing later on can visualize.
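Conceptually, this recording works like a thin wrapper around the objective function, a minimal sketch of which is shown below; the class name is hypothetical, and COCO's real logger additionally tracks targets and restarts and writes to the .dat/.tdat files rather than to a list in memory:

```python
class RecordingProblem:
    """Wrap an objective function and log every best-so-far improvement
    together with the evaluation count at which it occurred."""

    def __init__(self, f):
        self.f = f
        self.evaluations = 0
        self.best = float('inf')
        self.log = []  # (evaluation count, best f-value) pairs

    def __call__(self, x):
        self.evaluations += 1
        fx = self.f(x)
        if fx < self.best:
            self.best = fx
            self.log.append((self.evaluations, fx))
        return fx

prob = RecordingProblem(lambda x: sum(xi * xi for xi in x))
for x in ([2.0], [3.0], [1.0], [0.5]):
    prob(x)
print(prob.log)  # improvements occurred at evaluations 1, 3 and 4
```

The algorithm only ever calls `prob(x)`; the logging is invisible to it, so no knowledge of the optimum leaks into the search.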

Probably still the best information about this issue (though slightly outdated with the latest data formats) can be found in Appendix E of http://coco.lri.fr/downloads/download15.03/bbobdocexperiment.pdf