soft-matter / trackpy

Python particle tracking toolkit
http://soft-matter.github.io/trackpy

RuntimeError on trackpy.batch #701

Open Alex-code-lab opened 2 years ago

Alex-code-lab commented 2 years ago

Hello,

I just began using trackpy to follow cells during migration. Unfortunately, an error occurs when I run the code: it loops because of a RuntimeError raised by tp.batch, and I have no idea why. Here is the error I get:

File "/Users/alexandre/opt/anaconda3/lib/python3.9/multiprocessing/spawn.py", line 236, in prepare _fixup_main_from_path(data['init_main_from_path']) File "/Users/alexandre/opt/anaconda3/lib/python3.9/multiprocessing/spawn.py", line 287, in _fixup_main_from_path main_content = runpy.run_path(main_path, File "/Users/alexandre/opt/anaconda3/lib/python3.9/runpy.py", line 268, in run_path return _run_module_code(code, init_globals, run_name, File "/Users/alexandre/opt/anaconda3/lib/python3.9/runpy.py", line 97, in _run_module_code _run_code(code, mod_globals, init_globals, File "/Users/alexandre/opt/anaconda3/lib/python3.9/runpy.py", line 87, in _run_code exec(code, run_globals) File "/Users/alexandre/Desktop/motility/testPims.py", line 62, in f = tp.batch(frames[:], 13, minmass=MinMass_calc, invert=True); File "/Users/alexandre/opt/anaconda3/lib/python3.9/site-packages/trackpy/feature.py", line 549, in batch pool, map_func = get_pool(processes) File "/Users/alexandre/opt/anaconda3/lib/python3.9/site-packages/trackpy/utils.py", line 412, in get_pool pool = Pool(processes=processes) File "/Users/alexandre/opt/anaconda3/lib/python3.9/multiprocessing/pool.py", line 212, in init self._repopulate_pool() File "/Users/alexandre/opt/anaconda3/lib/python3.9/multiprocessing/pool.py", line 303, in _repopulate_pool return self._repopulate_pool_static(self._ctx, self.Process, File "/Users/alexandre/opt/anaconda3/lib/python3.9/multiprocessing/pool.py", line 326, in _repopulate_pool_static w.start() File "/Users/alexandre/opt/anaconda3/lib/python3.9/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/Users/alexandre/opt/anaconda3/lib/python3.9/multiprocessing/context.py", line 284, in _Popen return Popen(process_obj) File "/Users/alexandre/opt/anaconda3/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 32, in init super().init(process_obj) File "/Users/alexandre/opt/anaconda3/lib/python3.9/multiprocessing/popen_fork.py", line 19, in init self._launch(process_obj) File "/Users/alexandre/opt/anaconda3/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 42, in _launch prep_data = spawn.get_preparation_data(process_obj._name) File "/Users/alexandre/opt/anaconda3/lib/python3.9/multiprocessing/spawn.py", line 154, in get_preparation_data _check_not_importing_main() File "/Users/alexandre/opt/anaconda3/lib/python3.9/multiprocessing/spawn.py", line 134, in _check_not_importing_main raise RuntimeError(''' RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

and here is the code I'm using:

import numpy as np
from scipy import ndimage
from skimage import io
import os
import matplotlib.pyplot as plt
# from skimage.segmentation import circle_level_set, inverse_gaussian_gradient, morphological_geodesic_active_contour
import glob
# import re
import skimage
# from math import ceil
import pims
import trackpy as tp
#%%
# =============================================================================
#        Main 
# =============================================================================
#Importing frames
frames = pims.ImageSequence('/Users/alexandre/Desktop/motility/data_mot/*.tif')#, as_grey=True)
plt.imshow(frames[0])
#Basic localisation of the particles to find a minmass
f_0 = tp.locate(frames[0], 13, minmass=None, maxsize=None, separation=None, noise_size=1, smoothing_size=None, threshold=None, invert=True, percentile=64, topn=None, preprocess=True, max_iterations=10, filter_before=None, filter_after=None, characterize=True, engine='auto')
MinMass_calc = np.percentile(f_0['mass'],97)
print(MinMass_calc)
#%%
plt.figure()
tp.subpx_bias(f_0)
#%%
#Definitive localisation of the particles 
f = tp.locate(frames[0], 13, minmass=MinMass_calc, maxsize=None, separation=None, noise_size=1, smoothing_size=None, threshold=None, invert=True, percentile=64, topn=None, preprocess=True, max_iterations=10, filter_before=None, filter_after=None, characterize=True, engine='auto')
plt.figure()  # make a new figure
tp.annotate(f, frames[0])

# %%
fig, ax = plt.subplots()
ax.hist(f['mass'], bins=20)
# Optionally, label the axes.
ax.set(xlabel='mass', ylabel='count')
print(f['mass'].mean()/2)

#%%
#following the particles 
f = tp.batch(frames[:], 13, minmass=MinMass_calc, invert=True)

Do you have an idea of how to solve this issue?

Thank you for your help!!

Bye!

nkeim commented 2 years ago

This is caused by a subtle issue involving Python's multiprocessing module on Windows. If you are unsure how to follow the advice in the error message, a search of the old issues in this repository, or a web search, will turn up some help. The workaround in trackpy is to pass the processes=1 argument to batch, which turns off multiprocessing altogether; see the sketch below.
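For reference, a minimal sketch of that workaround, reusing the call from the script above (the feature size and the MinMass_calc value are just the ones from your example):

    # Run batch in a single process so no worker pool is spawned
    f = tp.batch(frames, 13, minmass=MinMass_calc, invert=True, processes=1)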

Alex-code-lab commented 2 years ago

Thank you @nkeim for your answer. I'm running it on a MacBook Air; it might be the same problem as on Windows. I will check this and try turning multiprocessing off tomorrow.

jacopoabramo commented 2 years ago

Hi @Alex-code-lab, I notice that the traceback reports the following lines:

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

This happened to me too while running some other code that uses the multiprocessing package (I'm on Windows). If you plan to use multiprocessing outside of trackpy.batch, you might want to wrap your script's entry point in an if __name__ == "__main__": guard, as sketched below. This won't fix the issue with the batch call, but it will avoid problems when running parallel code.
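A minimal sketch of that guard, using the imports and the path from your own script (the batch parameters are just placeholders from your example):

    import pims
    import trackpy as tp

    def main():
        # Everything that may spawn worker processes lives inside main()
        frames = pims.ImageSequence('/Users/alexandre/Desktop/motility/data_mot/*.tif')
        f = tp.batch(frames, 13, invert=True)
        return f

    if __name__ == '__main__':
        # The guard keeps child processes from re-running the whole script
        # when they re-import it under the 'spawn' start method.
        main()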

Alex-code-lab commented 2 years ago

Hi @jacopoabramo, thank you for your answer. I dug into the multiprocessing library (not so hard, ahah) and found that on macOS, since Python 3.8, the default start method is 'spawn'. Running import multiprocessing and then multiprocessing.get_start_method() confirms this by returning 'spawn'. So I simply called multiprocessing.set_start_method('fork', True), and now the code runs. It might also help you! A short version of the snippet is below.
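For completeness, a minimal sketch of that fix; it has to run before trackpy creates its worker pool (the True argument is the force flag):

    import multiprocessing

    # On macOS, Python 3.8+ defaults to the 'spawn' start method
    print(multiprocessing.get_start_method())  # prints 'spawn'

    # Switch back to 'fork' before any pool is created
    multiprocessing.set_start_method('fork', True)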

Thank you for your help!