jMetal / jMetalPy

A framework for single/multi-objective optimization with metaheuristics
https://jmetal.github.io/jMetalPy/index.html
MIT License

OMOPSO with multi-process failed #159

Open · jacktang opened this issue 1 year ago

jacktang commented 1 year ago

Hello,

I want to run OMOPSO in parallel using multiprocessing, and the code looks like this:

from jmetal.algorithm.multiobjective.omopso import OMOPSO
from jmetal.operator.mutation import NonUniformMutation, UniformMutation
from jmetal.problem.multiobjective.zdt import ZDT1
from jmetal.util.archive import CrowdingDistanceArchive
from jmetal.util.evaluator import MultiprocessEvaluator
from jmetal.util.termination_criterion import StoppingByEvaluations

problem = ZDT1()

mutation_probability = 1.0 / problem.number_of_variables
max_evaluations = 25000
swarm_size = 100

algorithm = OMOPSO(
    problem=problem,
    swarm_size=swarm_size,
    epsilon=0.0075,
    uniform_mutation=UniformMutation(probability=mutation_probability, perturbation=0.5),
    non_uniform_mutation=NonUniformMutation(mutation_probability, perturbation=0.5,
                                            max_iterations=int(max_evaluations / swarm_size)),
    leaders=CrowdingDistanceArchive(100),
    termination_criterion=StoppingByEvaluations(max_evaluations=max_evaluations),
    swarm_evaluator=MultiprocessEvaluator(processes=4),  # parallel evaluation of the swarm
)

algorithm.run()

and the error:

File ~/opt/miniconda3/lib/python3.9/site-packages/jmetal/core/algorithm.py:84, in Algorithm.run(self)
     82 LOGGER.debug('Running main loop until termination criteria is met')
     83 while not self.stopping_condition_is_met():
---> 84     self.step()
     85     self.update_progress()
     87 self.total_computing_time = time.time() - self.start_computing_time

File ~/opt/miniconda3/lib/python3.9/site-packages/jmetal/core/algorithm.py:222, in ParticleSwarmOptimization.step(self)
    220 self.update_position(self.solutions)
    221 self.perturbation(self.solutions)
--> 222 self.solutions = self.evaluate(self.solutions)
    223 self.update_global_best(self.solutions)
    224 self.update_particle_best(self.solutions)

File ~/opt/miniconda3/lib/python3.9/site-packages/jmetal/algorithm/multiobjective/omopso.py:93, in OMOPSO.evaluate(self, solution_list)
     92 def evaluate(self, solution_list: List[FloatSolution]):
---> 93     return self.swarm_evaluator.evaluate(solution_list, self.problem)
...
    769     return self._value
    770 else:
--> 771     raise self._value

MaybeEncodingError: Error sending result: '[<jmetal.core.solution.FloatSolution object at 0x7ff59f4abc40>, <jmetal.core.solution.FloatSolution object at 0x7ff5a2248a30>, <jmetal.core.solution.FloatSolution object at 0x7ff5a22d5820>, <jmetal.core.solution.FloatSolution object at 0x7ff5a2303ee0>, <jmetal.core.solution.FloatSolution object at 0x7ff5a20cb8b0>, <jmetal.core.solution.FloatSolution object at 0x7ff5a2012610>, <jmetal.core.solution.FloatSolution object at 0x7ff5a22b6340>]'. Reason: 'RecursionError('maximum recursion depth exceeded while calling a Python object')'

Is it a bug?
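
The MaybeEncodingError above means the worker processes failed to pickle the evaluated FloatSolution objects while sending them back to the parent process. The evaluator can also be exercised outside OMOPSO to isolate that step; the following is only a minimal sketch, assuming the evaluate(solution_list, problem) signature visible in the traceback (the solution count and process count are arbitrary):

from jmetal.problem.multiobjective.zdt import ZDT1
from jmetal.util.evaluator import MultiprocessEvaluator

if __name__ == '__main__':
    problem = ZDT1()
    # create_solution() returns unevaluated FloatSolution objects
    solutions = [problem.create_solution() for _ in range(8)]

    evaluator = MultiprocessEvaluator(processes=4)
    evaluated = evaluator.evaluate(solutions, problem)

    # each solution should now carry its ZDT1 objective values
    print([tuple(s.objectives) for s in evaluated])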

jacktang commented 1 year ago

Well, I've tested this on both macOS and Ubuntu, and the above bug only occurred on macOS. Here is the detailed error:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/usr/local/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 125, in _main
    prepare(preparation_data)
  File "/usr/local/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "/usr/local/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "/usr/local/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 289, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/usr/local/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 96, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/usr/local/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Users/apple/Codes/caebigdata/fsj-reservoirs-control/reservoirs_optimized_control.py", line 70, in <module>
    result_opt = oc.run(process_num = 20, ret_sol_num = 2, save_fig=True, verbose=True, df_comp=None)
  File "/Users/apple/Codes/caebigdata/fsj-reservoirs-control/lib/models/pso_optimized_control.py", line 27, in run
    res = self.optimize(p, process_num, ret_sol_num, verbose, save_fig, df_comp)
  File "/Users/apple/Codes/caebigdata/fsj-reservoirs-control/lib/models/pso_model.py", line 85, in optimize
    swarm_evaluator = MultiprocessEvaluator(processes=1),
  File "/usr/local/lib/python3.10/site-packages/jmetal/util/evaluator.py", line 55, in __init__
    self.pool = Pool(processes)
  File "/usr/local/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/pool.py", line 215, in __init__
    self._repopulate_pool()
  File "/usr/local/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/pool.py", line 306, in _repopulate_pool
    return self._repopulate_pool_static(self._ctx, self.Process,
  File "/usr/local/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/pool.py", line 329, in _repopulate_pool_static
    w.start()
  File "/usr/local/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/usr/local/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/context.py", line 288, in _Popen
    return Popen(process_obj)
  File "/usr/local/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/usr/local/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/usr/local/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 42, in _launch
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "/usr/local/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "/usr/local/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
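
The RuntimeError above is the standard multiprocessing bootstrapping message. On macOS the default start method has been 'spawn' since Python 3.8, so every child process re-imports the main module, and any top-level code that creates the Pool then tries to start new processes during that import, which raises this error; on Linux the default is 'fork', which is why the same script can run there. A minimal sketch of the guard it asks for, with the OMOPSO construction from the first comment moved into a helper (run_omopso is just an illustrative name):

import multiprocessing as mp


def run_omopso():
    # Build OMOPSO exactly as in the first comment (including the
    # MultiprocessEvaluator) and call algorithm.run() in here, so that
    # nothing pool-related executes when the module is merely imported.
    ...


if __name__ == '__main__':
    # 'spawn' on macOS (default since Python 3.8), 'fork' on Linux
    print(mp.get_start_method())
    run_omopso()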