swhite2401 opened 8 months ago
Here is a notebook I use to measure the performance of multiprocessing. Can you tell me how it behaves for you (with the error message, if any)? It works normally on all my macOS machines; I'll try it on linux.
Do you have more details about the "many users": location, platform, versions… Just to try and find some common properties.
I do not have details, but the PR originates from complaints from CERN... @simoneliuzzo may know more about it. For now unix users do not complain since the default is fork, but this may change in the future...
@carmignani, @simoneliuzzo, I think you are the only extensive Mac users here. Could you test it on your side as well?
I repeated my test on linux (the "grappa" machine), and indeed I found strange results for tracking 100 particles over 1000 turns on the hmba test lattice:
- use_mp=False: 4.25 s
- use_mp=True, "fork": 1.23 s (x3.46 faster)
- use_mp=True, "spawn": 1.74 s (x2.44 faster)
This makes sense given the number of cores and the fact that "spawn" is slower than "fork": "spawn" has to pickle all the input arguments (`ring` and `pin`), send them to the subprocess, and unpickle them before running.
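To illustrate that overhead, here is a minimal sketch of the extra work "spawn" does per argument; the array shape mimics the (6, npart) particle array, and the names are just for this example, not AT code:

```python
import pickle
import time

import numpy as np

# With "spawn", every input argument is pickled in the parent, sent to
# the child process, and unpickled there before tracking starts;
# "fork" inherits the parent's memory directly instead.
pin = np.zeros((6, 100))            # stand-in for the particle array
t0 = time.time()
payload = pickle.dumps(pin)         # what the parent does per argument
restored = pickle.loads(payload)    # what each child does on startup
print(f"round trip: {time.time() - t0:.6f} s, {len(payload)} bytes")
```

The round trip is cheap for one small array, but it is paid for every argument and every worker at pool startup.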
- use_mp=False: 4.75 s (slightly slower than my old Mac)
- use_mp=True, "fork": 0.27 s (x17 faster: very nice)
- use_mp=True, "spawn": 6.21 s (slower than single processing!)

I cannot understand the performance of the spawn method, but it makes it unusable.
So I agree that there is a problem, but it seems to be linked to the operating system…
@swhite2401: does anyone get errors, or just performance problems?
On grappa, running a script in the terminal I get the usual error:

```
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:

    if __name__ == '__main__':
        freeze_support()
        ...

The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
```
Could you try running in the terminal instead of using jupyter?
This error, by the way, does not make sense, because on unix `freeze_support()` does nothing.
> Could you try running in the terminal instead of using jupyter?

In fact, I could not use Jupyter (new problem…), so I copied my notebook line by line into a terminal.
On grappa, I am running in a virtual environment based on `/usr/bin/python3`:
```
grappa:~ $ /usr/bin/python3
Python 3.8.5 (default, Jul 28 2020, 12:59:40)
[GCC 9.3.0] on linux
```
and I installed AT exactly as documented:
```
cd <my working copy of AT>
pip install -e .
```
So if I run the exact same commands in a python prompt (just like you did), it works. But if I run `python myscript.py`, it fails...
Very strange. I get exactly the same problem.
But a very simple modification of the script works:
```python
import sys
import at
import numpy as np
if sys.version_info.minor < 9:
    from importlib_resources import files, as_file
else:
    from importlib.resources import files, as_file
from time import time


def main():
    np.set_printoptions(linewidth=120)
    np.set_printoptions(precision=12)
    fname = 'hmba.mat'
    with as_file(files('machine_data') / fname) as path:
        ring = at.load_lattice(path)
    ring.disable_6d()
    sigm = at.sigma_matrix(ring.enable_6d(copy=True))
    npart = 100
    nturns = 1000
    pin = at.beam(npart, sigm)
    print(pin.shape)
    print("Tracking starts")
    t0 = time()
    pout1, _, td1 = ring.track(pin, nturns=nturns, losses=True, use_mp=True, start_method="spawn")
    print(time() - t0)
    print("Tracking starts")
    t0 = time()
    pout2, _, td2 = ring.track(pin, nturns=nturns, losses=True)
    print(time() - t0)
    np.testing.assert_equal(td1['loss_map'], td2['loss_map'])
    np.testing.assert_equal(pout1, pout2)


if __name__ == '__main__':
    main()
```
With this, it works normally on both macOS and linux. On top of that, I get much better results on grappa (with spawn):
```
(test38) grappa:python $ python mp.py
h, v
(6, 100)
Tracking starts
1.8297901153564453
Tracking starts
4.697385549545288
```
So there is a solution, but I do not understand why encapsulating the code in a function solves the problem.
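For what it's worth, a likely explanation is that "spawn" children re-import the main module from scratch, so any code at module top level runs again in every worker. A minimal sketch (the worker function is hypothetical, not AT code):

```python
import multiprocessing as mp


def work(x):
    # Hypothetical stand-in for the per-particle tracking job.
    return 2 * x


# With "spawn", every child re-imports this module. Code at module top
# level therefore executes once per worker; only the guarded block
# below is limited to the parent process, which is why moving the
# tracking call into a guarded main() avoids the error.
if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    with ctx.Pool(2) as pool:
        print(pool.map(work, [1, 2, 3]))   # runs in the parent only
```

Without the guard, each spawned child would reach the pool-creation line during its own import and try to start workers of its own, which is exactly what the "proper idiom" error message warns about.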
The script of @lfarv works for me and gives this output:

```
h, v
(6, 100)
Tracking starts
1.8296749591827393
Tracking starts
4.353475093841553
```
Probably we cannot do much more than improve the documentation; we could possibly catch the RuntimeError and give an error message with some instructions. Any better ideas?
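Along those lines, a rough sketch of what such a catch could look like; the wrapper name and the message text are purely illustrative, not an existing AT API:

```python
GUARD_HINT = (
    "Multiprocessing with the 'spawn' start method requires the script's "
    "entry point to be wrapped in a function and guarded by "
    "\"if __name__ == '__main__':\" (see the python multiprocessing docs)."
)


def run_with_hint(track_call, *args, **kwargs):
    # Hypothetical wrapper around the tracking call: re-raise the
    # RuntimeError produced by "spawn" with an actionable message.
    try:
        return track_call(*args, **kwargs)
    except RuntimeError as exc:
        raise RuntimeError(GUARD_HINT) from exc
```

The original error stays attached as `__cause__`, so the full traceback remains available for debugging.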
Three methods are available for python multiprocessing: "fork", "spawn" and "forkserver".
The default for unix will also be set to spawn starting at python 3.13. Many users have reported errors using the spawn method on Mac and unix systems. This problem is not understood and could become a severe issue if the fork method is deprecated in the future.
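For reference, the methods available on a given platform and the current default can be queried at run time, which may help when collecting reports across operating systems:

```python
import multiprocessing as mp

# Platform-dependent: linux typically offers all three start methods,
# while Windows only has "spawn".
print(mp.get_all_start_methods())   # e.g. ['fork', 'spawn', 'forkserver']
print(mp.get_start_method())        # the current default on this platform

# A specific method can be chosen per call via a context, without
# touching the process-wide default:
ctx = mp.get_context("spawn")
```

Using `get_context` rather than `set_start_method` keeps the choice local to one pool, so a library can pick a method without affecting the caller's other multiprocessing code.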