Closed: makmanalp closed this issue 4 months ago.
Same issue here
Thanks for the note. I dug in, and there are basically two problems with this particular example. I've modified your code to spend some compute time in `fun`, and to use a context manager for the `Pool` so it cleans up after itself:
```python
import multiprocessing as mp

def fun(args):
    x = 0
    for i in range(100):
        x += 1
    return args

if __name__ == "__main__":
    with mp.Pool(mp.cpu_count()) as p:
        r = p.map(fun, range(1000000))
    print(sum(x for x in r))
    print(((1000000 - 1) * 1000000) / 2)
```
This is the result using the repository version (on a Mac running Sonoma, though I don't think it should matter). As you can see, most of the time is now correctly attributed to code in `fun`.
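As a sanity check on the example's output, the two final `print` calls should agree: `p.map` preserves input order, so `sum(r)` is the sum of `0..999999`, which the closed form `n * (n - 1) / 2` gives directly. A quick verification, assuming `n = 1_000_000` as in the example:

```python
n = 1_000_000
# Gauss closed form for sum(range(n)) == 0 + 1 + ... + (n - 1)
expected = n * (n - 1) // 2
print(expected)        # 499999500000
print(sum(range(n)))   # same value, computed directly
```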
Describe the bug
Scalene does not appear to collect profiling data from subprocesses forked by `multiprocessing`.
To Reproduce
Run the test script in Scalene via

```shell
python3 -m scalene scalenetest.py
```
Expected behavior
I expect the line profiler to show stats from inside the function `fun()`, gathered from the subprocesses. What happens instead is that I get a profile showing the bulk of the time spent in the main process waiting for the subprocesses to complete.
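The original `scalenetest.py` is not reproduced in this thread. Based on the maintainer's description of what was modified (the compute loop in `fun` and the `Pool` context manager were both additions), a plausible minimal reconstruction looks like the following; treat it as a sketch, not the reporter's exact code:

```python
import multiprocessing as mp

# Trivial worker: no real compute happens here, so nearly all wall time
# is the main process waiting on the pool, which is the behavior the
# report describes seeing in the profile.
def fun(args):
    return args

if __name__ == "__main__":
    p = mp.Pool(mp.cpu_count())
    r = p.map(fun, range(1000000))
    p.close()
    p.join()
    print(sum(r))  # sum of 0..999999
```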