numericalalgorithmsgroup / pybobyqa

Python-based Derivative-Free Optimization with Bound Constraints
https://numericalalgorithmsgroup.github.io/pybobyqa/
GNU General Public License v3.0
79 stars 18 forks

Multiprocessing #22

Closed csaiedu closed 3 years ago

csaiedu commented 3 years ago

Does Py-BOBYQA support multiprocessing? I have tried to use it to speed up some optimizations, but with no effect:

The execution time with multiprocessing is twice as slow as running sequentially.

import numpy as np
import pybobyqa
from multiprocessing import Pool
import time

# Define the objective function
def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

# Define the starting point
x0 = np.array([-1.2, 1.0])

# Call Py-BOBYQA sequentially
start_time = time.time()
for _ in range(10):
    res = pybobyqa.solve(rosenbrock, x0)
print("--- %s seconds ---" % (time.time() - start_time))

# Call Py-BOBYQA in multiprocessing
work = [x0]*10
def work_log(work_data):
    soln = pybobyqa.solve(rosenbrock, work_data)
    return soln.f

def pool_handler():
    p = Pool(5)
    res = p.map(work_log, work)

if __name__ == '__main__':
    start_time = time.time()
    res = pool_handler()
    print("--- %s seconds ---" % (time.time() - start_time))

lindonroberts commented 3 years ago

I'm not sure what is happening here, as I get a good speedup when doing parallel runs of Py-BOBYQA. On my machine using Python 3.6 on Linux, I ran 20 jobs over 5 processors:

import numpy as np
import pybobyqa
from multiprocessing import Pool
import time

def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

x0 = np.array([-1.2, 1.0])
njobs = 20
nprocs = 5

# Sequential
start_time = time.time()
for _ in range(njobs):
    res = pybobyqa.solve(rosenbrock, x0)
print("--- Sequential: %s seconds ---" % (time.time() - start_time))

# Parallel
work = [x0]*njobs
def work_log(work_data):
    soln = pybobyqa.solve(rosenbrock, work_data)
    return soln.f

def pool_handler():
    p = Pool(nprocs)
    res = p.map(work_log, work)

start_time = time.time()
res = pool_handler()
print("--- Parallel: %s seconds ---" % (time.time() - start_time))

The result was (run 1):

--- Sequential: 3.44234037399292 seconds ---
--- Parallel: 1.0258424282073975 seconds ---

And a second run gave similar results:

--- Sequential: 3.6472291946411133 seconds ---
--- Parallel: 1.0090117454528809 seconds ---

If parallel is much slower for you, my thoughts are:

csaiedu commented 3 years ago

Thank you. I was able to fix my problem with your explanations.