
:encoding: utf-8
:imagesdir: tutorials/img
:cpp: C++

= fcmaes - a Python 3 gradient-free optimization library

https://gitter.im/fast-cma-es/community[image:https://badges.gitter.im/Join%20Chat.svg[]]

image::logo.gif[]

fcmaes complements https://docs.scipy.org/doc/scipy/reference/optimize.html[scipy optimize] by providing additional optimization methods, faster {cpp}/Eigen-based implementations and a coordinated parallel retry mechanism. It supports the multi-threaded application of different gradient-free optimization algorithms. There are 35 real-world https://github.com/dietmarwo/fast-cma-es/blob/master/tutorials/Tutorials.adoc[tutorials] showing in detail how to use fcmaes. See https://github.com/dietmarwo/fast-cma-es/blob/master/tutorials/Performance.adoc[performance] for detailed fcmaes performance figures.

fcmaes started as a fast CMA-ES implementation combined with a new smart parallel retry mechanism aimed at solving hard optimization problems from the space flight planning domain. It evolved into a general library of state-of-the-art gradient-free optimization algorithms applicable to all kinds of real-world problems, including multi-objective and constrained ones. Its main algorithms are implemented in both Python and {cpp} and support both parallel fitness function evaluation and a parallel retry mechanism.

=== https://github.com/dietmarwo/fast-cma-es/blob/master/tutorials/Tutorials.adoc[Tutorials]

=== https://github.com/dietmarwo/fast-cma-es/blob/master/tutorials/Performance.adoc[Performance]

=== Features

=== Changes from version 1.6.3:

=== Changes from version 1.4.0:

Derivative-free optimization of machine learning models often involves several thousand decision variables and requires GPU/TPU-based parallelization of both the fitness evaluation and the optimization algorithm. CR-FM-NES, PGPE and the QD-Diversifier applied to CR-FM-NES (CR-FM-NES-ME) are excellent choices in this domain. Since fcmaes has a different focus (parallel optimizations and parallel fitness evaluations), we contributed these algorithms to https://github.com/google/evojax/tree/main/evojax/algo[EvoJax], which utilizes https://github.com/google/jax[JAX] for GPU/TPU execution.

=== Optimization algorithms

To utilize modern many-core processors, all single-objective algorithms should be used with parallel retry for cheap fitness functions; for expensive fitness functions, use parallel function evaluation instead.
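The distinction can be sketched as follows. This is a hedged example, not library documentation: `num_retries` and the `workers` keyword of the native `cmaes.minimize` are assumptions that may differ between fcmaes versions, and the import is guarded so the sketch also runs where fcmaes is not installed.

[source,python]
----
import numpy as np

def sphere(x):
    # cheap fitness function: sum of squares, global minimum 0 at the origin
    return float(np.sum(np.asarray(x) ** 2))

try:
    from scipy.optimize import Bounds
    from fcmaes import retry, cmaes

    bounds = Bounds([-5.0] * 6, [5.0] * 6)
    # cheap fitness: run many independent optimizations in parallel
    # (assumption: `num_retries` keyword)
    ret = retry.minimize(sphere, bounds, num_retries=32)
    # expensive fitness: evaluate candidate solutions in parallel instead
    # (assumption: `workers` keyword on the native optimizer)
    ret = cmaes.minimize(sphere, bounds=bounds, workers=8)
except ImportError:
    pass  # fcmaes/scipy not installed; the two call patterns are the point
----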

=== Installation

==== Linux

To use the {cpp} optimizers, a gcc-9.3 (or newer) runtime is required. This is the default on recent Linux versions. If you are on an old Linux distribution, you need to install gcc-9 or a newer version. On Ubuntu this is:
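For example (a typical command; exact package names may differ between Ubuntu releases):

[source,shell]
----
sudo apt update
sudo apt install gcc-9 g++-9
----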

Alternatively if you use Anaconda:
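One option is the conda-forge `compilers` meta-package, which pulls in a recent gcc/g++ toolchain (shown here as a suggestion, not the project's documented command):

[source,shell]
----
conda install -c conda-forge compilers
----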

==== Windows

For parallel fitness function evaluation use the native Python optimizers or the ask/tell interface of the {cpp} ones. Python multiprocessing works better on Linux. To get optimal scaling from parallel retry and parallel function evaluation, use the Linux subsystem for Windows (WSL).

The Linux subsystem can read/write NTFS, so you can do your development on an NTFS partition; only the Python call is routed to Linux. If performance of the fitness function is an issue and you don't want to use the Linux subsystem for Windows, consider the fcmaes Java port: https://github.com/dietmarwo/fcmaes-java[fcmaes-java].

==== MacOS

The {cpp} shared library for MacOS is outdated; use the native Python optimizers instead.

=== Usage

Usage is similar to https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html[scipy.optimize.minimize].
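A minimal end-to-end sketch, using the classic Rosenbrock function and `scipy.optimize.Bounds` for illustration (the assumption here is that, as with scipy, the returned result object carries `x` and `fun` attributes; the fcmaes import is guarded so the sketch runs even without the library installed):

[source,python]
----
import numpy as np

def rosen(x):
    # Rosenbrock function, global minimum 0 at x = (1, ..., 1)
    x = np.asarray(x)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (1.0 - x[:-1]) ** 2))

try:
    from scipy.optimize import Bounds
    from fcmaes import retry
    from fcmaes.optimizer import logger

    bounds = Bounds([-5.0] * 4, [5.0] * 4)
    ret = retry.minimize(rosen, bounds, logger=logger())
    print(ret.x, ret.fun)  # best solution found and its objective value
except ImportError:
    pass  # fcmaes not installed; rosen and bounds show the expected inputs
----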

For parallel retry use:

[source,python]
----
from fcmaes.optimizer import logger
from fcmaes import retry

ret = retry.minimize(fun, bounds, logger=logger())
----

The retry logs the mean and standard deviation of the results, so it can be used to test and compare optimization algorithms. You may choose a different algorithm for the retry:

[source,python]
----
from fcmaes.optimizer import logger, Bite_cpp, De_cpp, Cma_cpp, Sequence

ret = retry.minimize(fun, bounds, logger=logger(), optimizer=Bite_cpp(100000))
ret = retry.minimize(fun, bounds, logger=logger(), optimizer=De_cpp(100000))
ret = retry.minimize(fun, bounds, logger=logger(), optimizer=Cma_cpp(100000))
ret = retry.minimize(fun, bounds, logger=logger(),
                     optimizer=Sequence([De_cpp(50000), Cma_cpp(50000)]))
----

More examples can be found in the https://github.com/dietmarwo/fast-cma-es/blob/master/examples[examples] directory. Check the https://github.com/dietmarwo/fast-cma-es/blob/master/tutorials/Tutorials.adoc[tutorials] for more details.

=== Dependencies

Runtime:

Compile time (binaries for Linux and Windows are included):

Optional dependencies:

Example dependencies:

=== Citing

[source]
----
@misc{fcmaes2022,
    author = {Dietmar Wolz},
    title = {fcmaes - A Python-3 derivative-free optimization library},
    note = {Python/C++ source code, with description and examples},
    year = {2022},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {Available at \url{https://github.com/dietmarwo/fast-cma-es}},
}
----