anyoptimization / pymoo

NSGA2, NSGA3, R-NSGA3, MOEAD, Genetic Algorithms (GA), Differential Evolution (DE), CMAES, PSO
https://pymoo.org
Apache License 2.0
2.28k stars 390 forks

Empty assertion Error #386

Closed: BenediktPrusas closed this 1 year ago

BenediktPrusas commented 1 year ago

I get this assertion error while using the NSGA2 algorithm. Unfortunately I don't know what progress is and can't find any documentation. Can you give me a hint what might be happening?

...\venv\lib\site-packages\pymoo\core\termination.py:29, in Termination.update(self, algorithm)
     27 else:
     28     progress = self._update(algorithm)
---> 29     assert progress >= 0.0
     31 self.perc = progress
     32 return self.perc

AssertionError: 
blankjul commented 1 year ago

progress is a float: 0 stands for no progress, and 1 means the run has terminated.
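
For context, that means a custom termination only needs _update(algorithm) to return a float in [0, 1]. A minimal sketch, assuming only the interface visible in the traceback above and that the algorithm object exposes n_gen:

from pymoo.core.termination import Termination

# minimal sketch of a custom termination, based on the _update(algorithm)
# contract shown in the traceback: return a progress value in [0, 1]
class MaxGenProgress(Termination):

    def __init__(self, n_max_gen):
        super().__init__()
        self.n_max_gen = n_max_gen

    def _update(self, algorithm):
        # must never be negative, or the assert on line 29 fires
        return min(algorithm.n_gen / self.n_max_gen, 1.0)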

Do you have a small code snippet to reproduce your error?

BenediktPrusas commented 1 year ago

Below is a reproduction of the error. The problem is the None in the bounds; it was easy to fix once I spotted it. Maybe errors like this could be prevented by checks in the __init__ of ElementwiseProblem that raise more precise exceptions? (A sketch of such a check follows the snippet.)


import numpy as np

from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize


class SimulationProblem(ElementwiseProblem):

    def __init__(self):
        # the None in xu is what triggers the empty AssertionError
        super().__init__(n_var=2, n_obj=2,
                         xl=np.array((0.0, 0.0)),
                         xu=np.array((None, 300.0)))

    def _evaluate(self, x, out, *args, **kwargs):
        print(x)
        f1 = x.sum()
        f2 = (x - 200).sum()
        out["F"] = (f1, f2)


algorithm = NSGA2(pop_size=5, eliminate_duplicates=True)

res = minimize(
    SimulationProblem(), algorithm, ("n_gen", 2), save_history=False, verbose=True
)
blankjul commented 1 year ago

Thanks for the code snippet! The problem is more of a conceptual one: NSGA2 uses SBX as its crossover and PM as its mutation, and both require lower and upper bounds to be set (which means you cannot use None or np.nan).

Is replacing None with a very large number, e.g. 1e+12, an option for you? Most algorithms in pymoo require finite bounds for sampling in the initial iteration (I know this is different from gradient-based optimizers).
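
Applied to the snippet above (reusing its imports), the workaround would look roughly like this; 1e+12 is just a placeholder for the missing upper bound:

class SimulationProblem(ElementwiseProblem):

    def __init__(self):
        # workaround sketch: substitute a large finite number for None so
        # that SBX and PM have a usable upper bound; the rest of the
        # class stays unchanged
        super().__init__(n_var=2, n_obj=2,
                         xl=np.array((0.0, 0.0)),
                         xu=np.array((1e12, 300.0)))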