thieu1995 / mealpy

A Collection Of The State-of-the-art Metaheuristic Algorithms In Python (Metaheuristic/Optimizer/Nature-inspired/Biology)
https://mealpy.readthedocs.io
GNU General Public License v3.0

[FEAT]: Explicit objective function evaluation limitation #148

Closed: msotocalvo closed this issue 5 months ago

msotocalvo commented 5 months ago

Description

To make a fair comparison between algorithms possible, a limit on the number of objective function evaluations (which is not synonymous with the number of iterations or epochs) is needed. Would you mind implementing the possibility of setting such a limit explicitly and globally? Thanks in advance.

Additional Information

I've been implementing it in some of the algorithms I'm currently using. This is an example of how to do it with ABC. Note that I have added the argument 'max_evals' as well as a counter 'self.function_evaluations':

import numpy as np

from mealpy.optimizer import Optimizer


class OriginalABC(Optimizer):
    def __init__(self, epoch: int = 10000, pop_size: int = 100, n_limits: int = 25, max_evals: int = 100000, **kwargs: object) -> None:
        """
        Args:
            epoch: maximum number of iterations, default = 10000
            pop_size: number of population size = onlooker bees = employed bees, default = 100
            n_limits: limit of trials before abandoning a food source, default = 25
            max_evals: maximum number of function evaluations, default = 100000
        """
        super().__init__(**kwargs)
        self.epoch = self.validator.check_int("epoch", epoch, [1, 100000])
        self.pop_size = self.validator.check_int("pop_size", pop_size, [5, 10000])
        self.n_limits = self.validator.check_int("n_limits", n_limits, [1, 1000])
        self.max_evals = self.validator.check_int("max_evals", max_evals, [1, 1000000])
        self.is_parallelizable = False
        self.set_parameters(["epoch", "pop_size", "n_limits", "max_evals"])
        self.sort_flag = False
        self.function_evaluations = 0  # Initialize the function evaluations counter

    def initialize_variables(self):
        self.trials = np.zeros(self.pop_size)

    def evolve(self, epoch):
        """
        The main operations (equations) of the algorithm. Inherit from Optimizer class

        Args:
            epoch (int): The current iteration
        """
        # Employed bees phase: each bee explores a neighbor of its own food source
        for idx in range(self.pop_size):
            if self.function_evaluations >= self.max_evals:
                break  # Stop the process if the max evaluations limit is reached
            rdx = self.generator.choice(list(set(range(self.pop_size)) - {idx}))
            phi = self.generator.uniform(low=-1, high=1, size=self.problem.n_dims)
            pos_new = self.pop[idx].solution + phi * (self.pop[rdx].solution - self.pop[idx].solution)
            pos_new = self.correct_solution(pos_new)
            agent = self.generate_agent(pos_new)
            self.function_evaluations += 1
            if self.compare_target(agent.target, self.pop[idx].target, self.problem.minmax):
                self.pop[idx] = agent
                self.trials[idx] = 0
            else:
                self.trials[idx] += 1

        # Onlooker bees phase: food sources are selected via roulette wheel on fitness
        for idx in range(self.pop_size):
            if self.function_evaluations >= self.max_evals:
                break  # Stop the process if the max evaluations limit is reached
            employed_fits = np.array([agent.target.fitness for agent in self.pop])
            selected_bee = self.get_index_roulette_wheel_selection(employed_fits)
            rdx = self.generator.choice(list(set(range(self.pop_size)) - {idx, selected_bee}))
            phi = self.generator.uniform(low=-1, high=1, size=self.problem.n_dims)
            pos_new = self.pop[selected_bee].solution + phi * (self.pop[rdx].solution - self.pop[selected_bee].solution)
            pos_new = self.correct_solution(pos_new)
            agent = self.generate_agent(pos_new)
            self.function_evaluations += 1
            if self.compare_target(agent.target, self.pop[selected_bee].target, self.problem.minmax):
                self.pop[selected_bee] = agent
                self.trials[selected_bee] = 0
            else:
                self.trials[selected_bee] += 1

        # Scout bees phase: abandoned food sources are replaced by new random agents
        abandoned = np.where(self.trials >= self.n_limits)[0]
        for idx in abandoned:
            if self.function_evaluations >= self.max_evals:
                continue  # Skip generating a new agent if the max evaluations limit is reached
            self.pop[idx] = self.generate_agent()
            self.trials[idx] = 0
            self.function_evaluations += 1
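For illustration, the modified class above could then be driven roughly as follows. This is only a hypothetical usage sketch: the sphere objective and the FloatVar/dict problem definition are placeholders (assuming mealpy's current dict-based problem interface), and max_evals is the argument added in the snippet above.

import numpy as np
from mealpy import FloatVar

def sphere(solution):
    # Placeholder objective: sum of squares
    return np.sum(solution ** 2)

problem = {
    "obj_func": sphere,
    "bounds": FloatVar(lb=[-10.0] * 30, ub=[10.0] * 30),
    "minmax": "min",
}

# Cap the run at 20000 objective function evaluations, independent of the epoch count
model = OriginalABC(epoch=10000, pop_size=100, n_limits=25, max_evals=20000)
best = model.solve(problem)
print(best.target.fitness, model.function_evaluations)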
thieu1995 commented 5 months ago

@msotocalvo,

I recommend you spend some time reading the documentation. We already have it: https://mealpy.readthedocs.io/en/latest/pages/general/advance_guide.html#stopping-condition-termination
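For reference, a minimal sketch of what that looks like, assuming the max_fe key of the Termination mechanism described in that guide (the sphere objective and the problem bounds here are placeholders):

import numpy as np
from mealpy import FloatVar, ABC

def sphere(solution):
    return np.sum(solution ** 2)

problem = {
    "obj_func": sphere,
    "bounds": FloatVar(lb=[-10.0] * 30, ub=[10.0] * 30),
    "minmax": "min",
}

# Terminate after at most 100000 objective function evaluations,
# no matter how many epochs that corresponds to
term = {"max_fe": 100000}

model = ABC.OriginalABC(epoch=10000, pop_size=100, n_limits=25)
best = model.solve(problem, termination=term)
print(best.solution, best.target.fitness)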