Project-Platypus / Platypus

A Free and Open Source Python Library for Multiobjective Optimization
GNU General Public License v3.0
565 stars 153 forks

Tracking convergence of the optimization #104

Closed pimhof closed 1 year ago

pimhof commented 5 years ago

Is there a way to track the convergence of the optimization (e.g. NSGA-II) in Platypus? I was unable to find a way to output the hypervolume or epsilon progress after each generation. Does anyone know if this is possible?

apoorvreddy commented 5 years ago

Define a custom TerminationCondition:

from platypus import TerminationCondition

class StoppingCondition(TerminationCondition):
    def __init__(self, max_iterations):
        """
        Parameters:
        max_iterations (int): do not run the optimizer for more than this many iterations
        """
        super(StoppingCondition, self).__init__()
        self.max_iterations = max_iterations
        self.iteration = 0
        self.history = []

    def initialize(self, algorithm):
        pass

    def shouldTerminate(self, algorithm):
        self.iteration += 1

        # Record the objectives of the first feasible solution, if any.
        # (The population is empty on the very first check, before the
        # algorithm has taken a step.)
        for p in algorithm.population:
            if p.feasible:
                self.history.append(p.objectives[:])
                print("iteration:", self.iteration, "objectives:", p.objectives)
                break

        return self.iteration >= self.max_iterations

Then, after defining your Problem and algorithm, pass the condition to the algorithm's run() method. For example:

from platypus import Problem, Real

# Convex problem: minimize ((x-5)/4)^2 + ((y-3)/2)^2 subject to x >= 10 and y >= 4.
# The constrained minimum is at (10, 4).
def convex_objective(x):
    x0 = x[0]
    x1 = x[1]
    # With constraints defined, the function returns (objectives, constraints).
    return [((x0 - 5) / 4) ** 2 + ((x1 - 3) / 2) ** 2], [x0 - 10, x1 - 4]

num_vars = 2
num_objs = 1
num_cons = 2
problem = Problem(num_vars, num_objs, num_cons)
problem.types[:] = Real(0, 50)
problem.directions[:] = Problem.MINIMIZE
problem.constraints[:] = ">=0"
problem.function = convex_objective

import random

from platypus import NSGAII, ProcessPoolEvaluator, GAOperator, SBX, PM

def run_opt():
    random.seed(24837102)
    stop = StoppingCondition(max_iterations=30)
    with ProcessPoolEvaluator(8) as evaluator:
        algorithm = NSGAII(problem,
                           population_size=100,
                           variator=GAOperator(SBX(probability=0.9,
                                                   distribution_index=20),
                                               PM(probability=0.1,
                                                  distribution_index=10)),
                           evaluator=evaluator)
        algorithm.run(stop)
    return algorithm, stop

alg, stop = run_opt()
print(stop.history)
apoorvreddy commented 5 years ago

Ignore the above snippet. A better way to do this would probably be to subclass the Archive class, as illustrated here

github-actions[bot] commented 1 year ago

This issue is stale and will be closed soon. If you feel this issue is still relevant, please comment to keep it active. Please also consider working on a fix and submitting a PR.