esa / pagmo2

A C++ platform to perform parallel computations of optimisation tasks (global and local) via the asynchronous generalized island model.
https://esa.github.io/pagmo2/
GNU General Public License v3.0

[BUG] Consecutive calls to PSO are not necessarily the same as running with multiple generations #486

Open tarcisiofischer opened 2 years ago

tarcisiofischer commented 2 years ago

According to this discussion, all algorithms should behave the same with gen=N vs N calls with gen=1 (did I understand correctly?).

I tried this with PSO in pygmo and couldn't make it work (I'll explain in more detail after showing the code):

import numpy as np
import pygmo as pg

# Note: the particle/generation counts are not stated in the report; the
# values below are placeholders so the snippet is self-contained.
N_PARTICLES = 20
N_GENERATIONS = 10

def pso_run_a(problem, seed):
    population = pg.population(pg.problem(problem), N_PARTICLES, seed=seed)
    algorithm = pg.algorithm(
        pg.pso(
            gen=1,
            omega=1,
            eta1=1,
            eta2=1,
            seed=seed,
            memory=True,
        )
    )
    for i in range(N_GENERATIONS):
        population = algorithm.evolve(population)
        print("Champ> ", population.champion_f[0])

def pso_run_b(problem, seed):
    population = pg.population(pg.problem(problem), N_PARTICLES, seed=seed)
    algorithm = pg.algorithm(
        pg.pso(
            gen=N_GENERATIONS,
            omega=1,
            eta1=1,
            eta2=1,
            seed=seed,
            memory=True,
        )
    )
    # algorithm.set_verbosity(1)
    population = algorithm.evolve(population)
    print("Champ> ", population.champion_f[0])

class MyProblem:
    def fitness(self, x):
        return [np.sum(x**2)]

    def get_bounds(self):
        return ([-10, -10], [10, 10])
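
Running both with the same problem and a fixed seed reproduces the mismatch (the seed here is a placeholder; the report does not say which values produced the numbers below):

problem = MyProblem()
seed = 42  # arbitrary; any fixed seed exposes the discrepancy
pso_run_a(problem, seed)
pso_run_b(problem, seed)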

Results:
pso_run_a: 35.042803177459504
pso_run_b: 14.44962519803876


It seems to be a bug related to the way the population evolves. AFAIU, when returning from the algorithm, PSO will write the best fit back into the population (important code here), so in the next run X will coincide with lbX. Whereas when running multiple generations in a single call (and assuming a generation didn't perform better than the previous one), X and lbX will be different (important code here).

There are several ways of solving this. The simplest, I believe, is to keep an internal m_X in the PSO algorithm class. What are your thoughts on this? Are you interested in a specific fix just for the PSO algorithm (I can open a PR)?

Thanks in advance!

darioizzo commented 2 years ago

Indeed it's a bug. I just noticed the equivalence is not tested for pso. Have a look at the test for cmaes here:

https://github.com/esa/pagmo2/blob/16fced244537f6ba1a5dc8872e410e333688834f/tests/cmaes.cpp#L266

A similar test should be added also for pso, and it should pass.
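
In pygmo terms, the property such a test would check is essentially the following (just a sketch with an arbitrary problem, population size and seed, not the actual C++ test):

import pygmo as pg

prob = pg.problem(pg.rosenbrock(2))
seed = 23

# one call evolving 10 generations ...
pop_a = pg.population(prob, 20, seed=seed)
algo_a = pg.algorithm(pg.pso(gen=10, seed=seed, memory=True))
pop_a = algo_a.evolve(pop_a)

# ... versus 10 calls evolving 1 generation each, with memory=True
pop_b = pg.population(prob, 20, seed=seed)
algo_b = pg.algorithm(pg.pso(gen=1, seed=seed, memory=True))
for _ in range(10):
    pop_b = algo_b.evolve(pop_b)

# once fixed, the two runs should produce the same champion
assert pop_a.champion_f[0] == pop_b.champion_f[0]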

Anything that needs to be remembered from previous generations should indeed be in the algo memory. When memory=true, it should be remembered.
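
As an illustration of the principle (a toy user-defined algorithm, hypothetical and not pagmo code): state kept on the UDA instance persists across evolve() calls, which is where anything of this kind belongs.

import pygmo as pg

class StatefulSearch:
    # Toy UDA: instance attributes persist between evolve() calls,
    # the same mechanism memory=true relies on internally.
    def __init__(self):
        self._best_f = None  # remembered across calls

    def evolve(self, pop):
        f = pop.champion_f[0]
        if self._best_f is None or f < self._best_f:
            self._best_f = f
        return pop

algo = pg.algorithm(StatefulSearch())
pop = pg.population(pg.problem(pg.rosenbrock(2)), 10, seed=1)
for _ in range(3):
    pop = algo.evolve(pop)
print(algo.extract(StatefulSearch)._best_f)  # state survived all three calls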

A PR would indeed be appreciated ...

darioizzo commented 2 years ago

In case you do open a PR, could you also make the symmetric modifications to pso_gen?

tarcisiofischer commented 2 years ago

@darioizzo I didn't make the pso_gen modifications yet because I wanted to show the pso version first, so you can validate that everything follows the project's patterns and matches your expectations. Feel free to ask for modifications.

Also note that I had to make further modifications in order to make all variations of pso work in the test case. Perhaps you'll find it a little overengineered... Feel free to suggest changes to the implementation. I tried to make it as explicit as I could to avoid confusion for future readers of the code.

I'm planning on making similar changes (and adding similar test) to pso_gen after you take a look at the PR :)

Thanks in advance!