Closed: mawi363 closed this issue 4 years ago
You should use alg.optimize() rather than alg.optimize_parallel(). Since you set num_cpus=4, it will run in parallel. As you have already observed firsthand, the function .optimize_parallel() is an advanced function of the "use at your own risk" kind.
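For example, something along these lines (a minimal sketch with a placeholder objective; the bounds and variable types are only illustrations):

import rbfopt
import numpy as np

def obj_funct(x):
    # placeholder for the expensive black-box evaluation
    return x[0]*x[1] - x[2]

if __name__ == '__main__':
    bb = rbfopt.RbfoptUserBlackBox(3, np.array([0] * 3), np.array([10] * 3),
                                   np.array(['R', 'I', 'R']), obj_funct)
    # num_cpus > 1 is enough to make optimize() evaluate points in parallel;
    # no call to optimize_parallel() is needed
    settings = rbfopt.RbfoptSettings(max_evaluations=500, num_cpus=4)
    alg = rbfopt.RbfoptAlgorithm(settings, bb)
    val, x, itercount, evalcount, fast_evalcount = alg.optimize()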
Thanks for the quick response. Sadly, setting num_cpus=4 and using alg.optimize() shows the same behaviour.
Any other ideas? Is there perhaps a working example of an optimization running in parallel, so I can test the rest of my setup?
The parallel optimization is based on the multiprocessing module, which imports all your modules for each thread. If you have includes that don't resolve properly, or if importing one module causes the optimization to start over, it may not work.
Where is the objective function defined? Make sure the corresponding module is properly wrapped with:
if name == 'main':
otherwise multiprocessing will keep loading your 'main' function indefinitely.
Oops, bad syntax. I mean:
if __name__ == '__main__':
I tried it using the minimal example but I still run into the same problem: a brief spike in CPU usage, then it drops down and starts to idle.
import rbfopt
import numpy as np

def obj_funct(x):
    return x[0]*x[1] - x[2]

if __name__ == '__main__':
    bb = rbfopt.RbfoptUserBlackBox(3, np.array([0] * 3), np.array([10] * 3),
                                   np.array(['R', 'I', 'R']), obj_funct)
    settings = rbfopt.RbfoptSettings(max_evaluations=500, num_cpus=4)
    alg = rbfopt.RbfoptAlgorithm(settings, bb)
    val, x, itercount, evalcount, fast_evalcount = alg.optimize()
Any other ideas? Maybe a mistake during setup? I am fairly new to Python so it could be something obvious...
The example that you wrote above should work, and in fact it does work: I just tried it on a fresh installation. It works from console as well as when executed as a script.
Are you not seeing any output at all? I suspect it may be a problem in your installation but I am not sure what.
I am seeing a little output. It prints the header of the output table, but that's it. The program seems to get stuck during the first initialization step. I tried different numbers of cores, but everything above 1 runs into the same problem.
Iter Cycle Action Objective value Time Gap
---- ----- ------ --------------- ---- ---
I tried leaving it running for a while, but even after 5 minutes of runtime nothing changes (using a single core it takes barely a second to get past the initialization steps).
Can you try to figure out what function it is spending time in? On Linux, just Ctrl + C while the process is running will tell you what received the interrupt. Otherwise you could try a profiler or something like that. As I said, the code snippet you posted works, so I'm really not sure what the problem is.
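For example, one generic way to see where it is stuck (a minimal sketch wrapping your snippet with the standard library's faulthandler module; nothing RBFOpt-specific):

import faulthandler
import rbfopt
import numpy as np

def obj_funct(x):
    return x[0]*x[1] - x[2]

if __name__ == '__main__':
    # Print the stack of every thread in this process after 60 seconds,
    # without terminating it, to show where optimize() is waiting.
    faulthandler.dump_traceback_later(60, exit=False)
    bb = rbfopt.RbfoptUserBlackBox(3, np.array([0] * 3), np.array([10] * 3),
                                   np.array(['R', 'I', 'R']), obj_funct)
    settings = rbfopt.RbfoptSettings(max_evaluations=500, num_cpus=4)
    alg = rbfopt.RbfoptAlgorithm(settings, bb)
    val, x, itercount, evalcount, fast_evalcount = alg.optimize()
    faulthandler.cancel_dump_traceback_later()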
Sorry for the long wait!
I think you are right and it's an error in my installation. So I made a fresh setup on another PC (Windows 10); here is what I used:
Then I used:
pip install pyomo
pip install rbfopt
pip install numpydoc
Then, using JupyterLab, I ran the following code:
import rbfopt
import numpy as np

Path1 = r'C:\Users\PC2\Downloads\bonmin-win64\bonmin'
Path2 = r'C:\Users\PC2\Downloads\ipopt-win64\ipopt'

def obj_funct(x):
    return x[0]*x[1] - x[2]

if __name__ == '__main__':
    bb = rbfopt.RbfoptUserBlackBox(3, np.array([0] * 3), np.array([10] * 3),
                                   np.array(['R', 'I', 'R']), obj_funct)
    settings = rbfopt.RbfoptSettings(max_evaluations=500, num_cpus=4,
                                     minlp_solver_path=Path1, nlp_solver_path=Path2)
    alg = rbfopt.RbfoptAlgorithm(settings, bb)
    val, x, itercount, evalcount, fast_evalcount = alg.optimize()
The result is the same problem as before: the code works with num_cpus=1 but freezes with num_cpus=4.
This is the output when it freezes:
Iter Cycle Action Objective value Time Gap
---- ----- ------ --------------- ---- ---
Now, unlike Anaconda, WinPython also leaves the command prompt open (I think), so I can finally see a real error message. This is the readout:
[I 05:07:25.073 LabApp] Build is up to date
[I 05:07:26.120 LabApp] Kernel started: XXXXX (I deleted this because I didn't know what it was and it kinda looked like my MAC address) XXXXX
Process SpawnPoolWorker-2:
Traceback (most recent call last):
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\pool.py", line 114, in worker
task = get()
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\queues.py", line 358, in get
return _ForkingPickler.loads(res)
AttributeError: Can't get attribute 'obj_funct' on <module '__main__' (built-in)>
Process SpawnPoolWorker-3:
Traceback (most recent call last):
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
Process SpawnPoolWorker-4:
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\pool.py", line 114, in worker
task = get()
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\queues.py", line 358, in get
return _ForkingPickler.loads(res)
Traceback (most recent call last):
AttributeError: Can't get attribute 'obj_funct' on <module '__main__' (built-in)>
Process SpawnPoolWorker-1:
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\pool.py", line 114, in worker
task = get()
Traceback (most recent call last):
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\queues.py", line 358, in get
return _ForkingPickler.loads(res)
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
AttributeError: Can't get attribute 'obj_funct' on <module '__main__' (built-in)>
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\pool.py", line 114, in worker
task = get()
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\queues.py", line 358, in get
return _ForkingPickler.loads(res)
AttributeError: Can't get attribute 'obj_funct' on <module '__main__' (built-in)>
Process SpawnPoolWorker-8:
Traceback (most recent call last):
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\pool.py", line 114, in worker
task = get()
File "C:\Users\PC2\Downloads\WPy64-3830\python-3.8.3.amd64\lib\multiprocessing\queues.py", line 358, in get
return _ForkingPickler.loads(res)
AttributeError: Can't get attribute 'obj_funct' on <module '__main__' (built-in)>
[I 05:09:26.067 LabApp] Saving file at /Untitled.ipynb
Is this useful? If not, could you give me a rundown of how you set up your installation?
Yes, this is very useful. It is definitely an issue with the multiprocessing module on Windows. You should be able to fix it following the suggestion reported here: https://stackoverflow.com/questions/41385708/multiprocessing-example-giving-attributeerror/42383397
Basically, you need to define obj_funct in a separate, importable module.
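Something like the following should work (a minimal sketch; my_objective is just an example file name):

# my_objective.py -- any importable module will do
def obj_funct(x):
    return x[0]*x[1] - x[2]

and then, in the script or notebook:

import rbfopt
import numpy as np
from my_objective import obj_funct   # imported, not defined in __main__

if __name__ == '__main__':
    bb = rbfopt.RbfoptUserBlackBox(3, np.array([0] * 3), np.array([10] * 3),
                                   np.array(['R', 'I', 'R']), obj_funct)
    settings = rbfopt.RbfoptSettings(max_evaluations=500, num_cpus=4)
    alg = rbfopt.RbfoptAlgorithm(settings, bb)
    val, x, itercount, evalcount, fast_evalcount = alg.optimize()

That way the spawned worker processes can unpickle the objective by importing my_objective, instead of looking it up on __main__, which is what fails with the spawn start method on Windows.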
Separating obj_funct into its own module and then importing it worked!
Thank you for helping me!!
Hello,
I am using RBFOpt to optimize a black-box function with expensive evaluations. It also has a large response surface, so I need a lot of evaluations. I have implemented the optimization using the regular 'optimize' function and it works without problems.
Now I am trying to use the 'optimize_parallel' function to speed up my optimization by using 4 cores.
I tried it like this:
I also set:
os.environ['OMP_NUM_THREADS'] = '1'
Starting the optimization, I see the CPU usage spike for a few seconds, then it falls down to 3-5% and the program seems to idle/freeze, but I get no error message.
Any ideas what I am doing wrong? I appreciate any help.