ghost opened this issue 6 years ago
From what I understand, you run the optimization 4 times in parallel, with 4 CPUs each. Is that correct? If so, this is absolutely normal: run the optimization 4 times in a row on a single machine and you won't get identical results either. Welcome to stochastic optimization :-).
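To make the point above concrete, here is a minimal, self-contained sketch (plain Python, no DEAP or SCOOP) of a tiny OneMax GA. Independent runs are only repeatable when seeded identically; the GA itself, its parameters, and the `onemax_ga` helper are hypothetical stand-ins for the real example, not the code from the issue.

```python
import random

def onemax_ga(seed, n_bits=20, pop_size=10, generations=15):
    """Tiny OneMax GA returning the best individual found.

    Plain-Python stand-in to show that a stochastic optimizer
    is only reproducible when its RNG is seeded identically.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, add one-bit-flip mutants of it.
        pop.sort(key=sum, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(n_bits)] ^= 1  # flip one random bit
            children.append(child)
        pop = parents + children
    return max(pop, key=sum)

# Same seed -> identical result; different seeds usually differ.
a = onemax_ga(seed=1)
b = onemax_ga(seed=1)
c = onemax_ga(seed=2)
print(a == b)  # True
```

With `random.seed(...)` commented out (as in the issue), every run — and every node — effectively uses a different seed, so differing results are expected.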
It looks like the problem is that you're running the same GA independently on each node, so each node builds its own hall of fame. If you want a single hall of fame, the algorithm must run only once, which requires a master-slave setup. I don't know how to achieve this with SCOOP (SCOOP appeared very buggy to me), but with mpi4py you would do something like this:
```python
from mpi4py.futures import MPIPoolExecutor

if __name__ == '__main__':
    with MPIPoolExecutor() as executor:
        toolbox.register("map", executor.map)
```
then run the code like this:

```shell
NUM_MASTERS=1  # should only ever be 1
NUM_SLAVES=5
mpiexec -n $NUM_MASTERS -usize $(($NUM_SLAVES + $NUM_MASTERS)) python GA_script.py
```
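The master-slave pattern described above can be sketched without MPI at all: one master process owns the GA loop and the single hall of fame, and only fitness evaluations go through a pluggable `map`. Here a stdlib `ThreadPoolExecutor` stands in for mpi4py's `MPIPoolExecutor`; `evaluate`, `run_master`, and the variation step are hypothetical stand-ins, not DEAP's API.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def evaluate(individual):
    """OneMax fitness: count of 1-bits."""
    return sum(individual)

def run_master(parallel_map, n_bits=16, pop_size=8, generations=10, seed=0):
    """One master drives the loop; workers only evaluate fitness."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = None  # the single hall of fame (size 1), owned by the master
    for _ in range(generations):
        # Only the fitness evaluations are farmed out via parallel_map.
        fitnesses = list(parallel_map(evaluate, pop))
        ranked = [ind for _, ind in sorted(zip(fitnesses, pop),
                                           key=lambda t: t[0], reverse=True)]
        if best is None or evaluate(ranked[0]) > evaluate(best):
            best = ranked[0][:]
        # Crude variation: keep the top half, add one-bit-flip mutants.
        parents = ranked[: pop_size // 2]
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(n_bits)] ^= 1
            children.append(child)
        pop = parents + children
    return best

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=4) as executor:
        champion = run_master(executor.map)
    print(evaluate(champion))
```

Because only the master touches the RNG and the hall of fame, swapping `executor.map` for `MPIPoolExecutor().map` (or even the builtin `map`) changes the parallelism but not the result.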
I already posted this on Stack Overflow, but it might be helpful here as well: https://stackoverflow.com/questions/51047445/python-deap-scoop-different-result-hall-of-fame-for-each-node
I'm using
to run the genetic algorithm example scoop/examples/deap_ga_onemax.py from https://github.com/soravux/scoop/blob/master/examples/deap_ga_onemax.py on an HPC cluster with a SLURM script; see the code posted below.
Please note that line 71 of deap_ga_onemax.py has been commented out:

```python
#random.seed(64)
```
Python code:
SLURM script:
Problem: running deap_ga_onemax.py in parallel on 4 nodes with 4 processors each produces 4 different halls of fame; running on 3 nodes produces 3 different halls of fame, and so on. Each node ends up with its own hall of fame.
How can I obtain a single hall of fame containing the results from all nodes?
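One workaround, if restructuring the run isn't an option: let every node keep its own hall of fame, then merge them afterwards into one global ranking. The sketch below uses a hypothetical `fitness` scoring function and plain lists; with DEAP you would instead call a fresh `tools.HallOfFame`'s `update()` on each node's members.

```python
import heapq

def merge_halls_of_fame(halls, k, fitness):
    """Return the k best individuals across several per-node halls of fame."""
    combined = [ind for hof in halls for ind in hof]
    return heapq.nlargest(k, combined, key=fitness)

# Two nodes, each with its own local hall of fame of bit strings.
node_hofs = [
    [[1, 1, 0], [1, 0, 0]],
    [[1, 1, 1], [0, 1, 1]],
]
global_hof = merge_halls_of_fame(node_hofs, k=2, fitness=sum)
print(global_hof)  # best individual across all nodes comes first
```

This doesn't make the runs equivalent to a single master-slave GA (each node still explored independently), but it does yield one combined hall of fame from the per-node results.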
Any insight would be very helpful...