anyoptimization / pymoo

NSGA2, NSGA3, R-NSGA3, MOEAD, Genetic Algorithms (GA), Differential Evolution (DE), CMAES, PSO
https://pymoo.org
Apache License 2.0

Resuming optimization from result object #641

Open Leviathan321 opened 1 month ago

Leviathan321 commented 1 month ago

I have executed an optimization run and backed up the result object using dill, as proposed in the tutorial. Let's say it executed 2 iterations. As a checkpoint I use the last algorithm object of its history.
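The backup itself is just the dill dump/load step from the tutorial, roughly like this (only a sketch; the file name and variable names are placeholders, not my exact code):

import dill

# dump the result object of the first run (2 iterations) to disk
with open("result_checkpoint.pkl", "wb") as f:
    dill.dump(res, f)

# ... later, load it again before resuming
with open("result_checkpoint.pkl", "rb") as f:
    result = dill.load(f)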

When I resume the search for a number of generations, let's say 3, I find that the new result object's history contains one more algorithm object than the total number of iterations: it should have 5 algorithm objects, but it has 6, i.e. it performs one additional iteration. In addition, the very last generation from the previous search is lost/overwritten.

Is there anything wrong with my code?

import time

from pymoo.optimize import minimize

# ResultExtended and Callback are my own subclasses (definitions omitted here)
termination = ...
history = result.history
problem = result.problem

# use the last algorithm object of the previous run as the checkpoint
checkpoint_algo = result.history[-1]
# previous_exec_time is the elapsed wall-clock time of the previous run
checkpoint_algo.start_time = time.time() - previous_exec_time
checkpoint_algo.termination = termination

# reset the stored problem before handing it to minimize again
checkpoint_algo.problem = None

res: ResultExtended = minimize(problem,
                               checkpoint_algo,
                               termination,
                               save_history=True,
                               copy_algorithm=True,
                               verbose=True,
                               callback=Callback(),
                               history=history)
blankjul commented 1 month ago

What checkpointing method are you using? The object-oriented one from here? https://pymoo.org/misc/checkpoint.html?highlight=checkpoint#Object-Oriented
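For reference, the object-oriented variant keeps the algorithm object itself, runs it step by step, and dumps that object with dill, roughly like this (a sketch based on that page; the problem, pop_size, and exact import paths are illustrative and may differ between pymoo versions):

import dill

from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.problems import get_problem
from pymoo.termination.max_gen import MaximumGenerationTermination

problem = get_problem("zdt1")

algorithm = NSGA2(pop_size=100)
algorithm.setup(problem, termination=('n_gen', 5), seed=1, verbose=True)

# run the first 5 generations step by step
while algorithm.has_next():
    algorithm.next()

# dump the whole algorithm object as a checkpoint
with open("checkpoint", "wb") as f:
    dill.dump(algorithm, f)

# load the checkpoint, extend the termination, and continue
with open("checkpoint", "rb") as f:
    checkpoint = dill.load(f)

checkpoint.termination = MaximumGenerationTermination(10)
res = checkpoint.run()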

Can you provide a short example for me to reproduce the gap in the history? Would a simple algorithm.history.append(deepcopy(algorithm)) or something similar solve your issue?
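Concretely, I am thinking of something along these lines before resuming (only a sketch; checkpoint_algo refers to the variable in your snippet above):

from copy import deepcopy

# record the restored checkpoint state in its own history once,
# before the resumed run starts appending new generations
checkpoint_algo.history.append(deepcopy(checkpoint_algo))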