The Optimizer

The Optimizer is a Python package that provides a collection of powerful optimization algorithms, including MOPSO (Multi-Objective Particle Swarm Optimization). The primary purpose of this package is to facilitate running optimization tasks using user-defined Python functions as the optimization target. The package is developed with the objectives of CMS and Patatrack in mind.

Installation

To use The Optimizer package, you can install it using pip and the provided setup.py script.

  1. Clone this repository
  2. Navigate to the project directory
  3. Install the package and its dependencies using pip:

    pip install .

Alternatively, you can install the package in “editable” (develop) mode while you are working on it. When installed as editable, the package can be modified in place without reinstallation:

python3 -m pip install -e .

Usage

Currently the package provides the optimizer module that defines an optimization algorithm: MOPSO. The optimizer relies on a few helper classes to handle configuration and the objective functions. To use this module in your Python projects:

  1. Import the required modules:

    import optimizer
  2. Define the objective functions to be optimized, e.g.:

    def f1(x):
        return 4 * x[0]**2 + 4 * x[1]**2
    
    def f2(x):
        return (x[0] - 5)**2 + (x[1] - 5)**2
    
    objective = optimizer.ElementWiseObjective([f1, f2])
  3. Define the boundaries of the parameters:

    lb = [0.0, 0.0]
    ub = [5.0, 3.0]
  4. Create the MOPSO object with the configuration of the algorithm:

    mopso = optimizer.MOPSO(objective=objective,
                            lower_bounds=lb, upper_bounds=ub,
                            num_particles=50,
                            inertia_weight=0.4, cognitive_coefficient=1.5, social_coefficient=2,
                            initial_particles_position='random', exploring_particles=True,
                            max_pareto_lenght=100)
  5. Run the optimization algorithm:

    pareto_front = mopso.optimize(num_iterations=100)
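
The returned Pareto front can then be inspected. A minimal sketch: particles expose a position attribute (see the Objective Function section below), while the fitness attribute used here is an assumption to check against the docstring:

    # Sketch: iterate over the returned Pareto front.
    # `position` is documented below; `fitness` is assumed, check the docstring.
    for particle in pareto_front:
        print(particle.position, particle.fitness)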

Objective Function

Depending on the optimization mode, the objective function can be defined in two ways:

  1. As Objective, the objective function is evaluated once per iteration and is called as:

    f([particle.position for particle in self.particles])

    The argument of the objective function is a list of arrays: one array of parameters per particle.
    The output is a list of fitnesses: the value(s) to minimize for each particle.

  2. As ElementWiseObjective, the objective function is evaluated particle by particle at every iteration and is called as:

    f(self.position) # self is a Particle object

    The argument of the objective function is an array of elements corresponding to the parameters to optimize.
    The output is the fitness of the particle: the value(s) to minimize in order to solve the optimization problem.
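
As a sketch, the two objectives from the Usage section can be wrapped in either style. Passing a list of batch functions to Objective mirrors the ElementWiseObjective call and is an assumption to check against the docstring:

    # Reusing f1 and f2 from the Usage section.

    # Per-particle style: each function receives a single parameter array
    elementwise = optimizer.ElementWiseObjective([f1, f2])

    # Batch style: each function is called once per iteration with the
    # positions of all particles and returns one fitness value per particle
    def f1_batch(positions):
        return [f1(x) for x in positions]

    def f2_batch(positions):
        return [f2(x) for x in positions]

    batch = optimizer.Objective([f1_batch, f2_batch])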

See the tests and examples folders for examples.

MOPSO

The Multi-Objective Particle Swarm Optimization (MOPSO) algorithm is a versatile optimization tool designed for solving multi-objective problems. It leverages the concept of a particle swarm to navigate the search space and find optimal trade-offs between the objectives.

See the docstring for additional information on the parameters.

Checkpoint system

The optimizer can be run using the optimize method for a specific number of iterations, or interactively by calling the step method to perform a single iteration.
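
A minimal interactive sketch, assuming step() takes no required arguments and advances the optimizer by one iteration (reusing the mopso object from the Usage section):

    # Run ten iterations one at a time, e.g. to inspect progress in between
    for _ in range(10):
        mopso.step()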

In addition, the optimizer allows you to stop the execution, saving its state, and later restore the execution from where it left off.

To do this, first enable saving and loading using the FileManager helper class:

optimizer.FileManager.working_dir = "tmp/zdt1/"
optimizer.FileManager.loading_enabled = True
optimizer.FileManager.saving_enabled = True

After launching optimize, the state of the optimizer will be saved in the mopso.pkl file inside the working directory.

A new run of the script will attempt to load the file and resume execution from the iteration at which it was stopped.

For example, if you run optimize until iteration 100, save, and then rerun with 200 iterations, the optimizer will call the step function only for iterations 101 to 200.

The saving option also exports the state of the particles at every iteration to a history directory inside the working directory.
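
For example, a sketch of a script using the checkpoint system (the objective and bounds are the ones from the Usage section; other constructor arguments are omitted for brevity):

    import optimizer

    optimizer.FileManager.working_dir = "tmp/zdt1/"
    optimizer.FileManager.loading_enabled = True
    optimizer.FileManager.saving_enabled = True

    mopso = optimizer.MOPSO(objective=objective,
                            lower_bounds=lb, upper_bounds=ub,
                            num_particles=50)

    # First run: performs iterations 1-100 and writes mopso.pkl to tmp/zdt1/.
    # Relaunching the script with num_iterations=200 reloads mopso.pkl and
    # performs only iterations 101-200.
    pareto_front = mopso.optimize(num_iterations=100)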

Random

The MOPSO optimizer relies heavily on random number generation. To obtain reproducible results, a helper allows you to set the seed for every random generation performed by the algorithm:

import numpy as np

optimizer.Randomizer.rng = np.random.default_rng(42)

Logging

You can configure the amount of logging information printed to the terminal with:

optimizer.Logger.setLevel('DEBUG')

The supported levels, from least to most verbose, are ERROR, WARN, INFO, and DEBUG.

Contributing

Contributions are welcome. If you want to contribute, please follow the Contribution guidelines.

License

The Optimizer is distributed under the MPL 2.0 License. Feel free to use, modify, and distribute the code following the terms of the license.