pwollstadt / IDTxl

The Information Dynamics Toolkit xl (IDTxl) is a comprehensive software package for efficient inference of networks and their node dynamics from multivariate time series data using information theory.
http://pwollstadt.github.io/IDTxl/
GNU General Public License v3.0

Proposal to improve CPU multiprocessing performance #103

Open kuffmode opened 7 months ago

kuffmode commented 7 months ago

Hi all, I realized the multi-threading performance is good, but it can probably be even better on local machines. I noticed that my CPU cores are not fully engaged, so a simple solution that I usually use, and found helpful here, is joblib's parallel processing. At its core, all it needs is something like this:

from joblib import Parallel, delayed

results = Parallel(n_jobs=-1)(
    delayed(network_analysis.analyse_single_target)(
        settings=settings, data=data, target=node)
    for node in range(n_nodes))

But of course, it would be more user-friendly if this were wrapped in a function, something like an interface where we just specify how many jobs, what to do, some kwargs for the function, and potentially some kwargs for the parallel processing backend (see the sketch below). This way, each core is occupied with one single_target analysis, which in my case helps a lot with performance. About joblib: we used it in our own library, and compared to fancier options like dask and ray it is actually a lot better and less painful. So far, it has never broken anything for us.
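A minimal sketch of such a wrapper, assuming joblib is installed; the function name analyse_network_parallel and the targets and parallel_kwargs parameters are illustrative, not existing IDTxl API:

from joblib import Parallel, delayed

def analyse_network_parallel(network_analysis, settings, data, targets,
                             n_jobs=-1, **parallel_kwargs):
    # Hypothetical convenience wrapper: run analyse_single_target once per
    # target, one job per core by default (n_jobs=-1).
    # parallel_kwargs is forwarded to joblib (e.g. verbose=10, backend='loky').
    return Parallel(n_jobs=n_jobs, **parallel_kwargs)(
        delayed(network_analysis.analyse_single_target)(
            settings=settings, data=data, target=target)
        for target in targets)

# usage, e.g.:
# results = analyse_network_parallel(
#     network_analysis, settings, data, range(n_nodes), verbose=10)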

mwibral commented 7 months ago

Dear Kayson,

indeed, Java multithreading saturates somewhere above 10-12 threads, because it operates within a single dataset. For acceleration there is a branch on git using MPI. As long as you stay on a single node, that seems to be the most efficient way to go. The branch needs some extra unit tests, but in general you could give it a try; it should basically work, given you have some minimal familiarity with calling MPI-based programs.

For scaling across multiple nodes in a cluster, some more changes will have to be made, but we're actively working on that right now.

Best,
Michael


pwollstadt commented 7 months ago

Hi Kayson,

Thanks for sharing this; it looks like a nice way to make the parallelization over targets more convenient. My proposal would be to add this as a demo script, so people can build on it. If you like, just open a pull request or send me your script and I will test/adapt it and include it.

Regarding Michael's comment, I just merged @daehrlich's implementation for MPI-supported CMI estimation into master (release v1.5). Maybe this is helpful as well.
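As a point of reference for readers unfamiliar with MPI: MPI-parallel Python code is commonly driven through mpi4py.futures. Below is a generic sketch of that pattern, not IDTxl's actual API (check the v1.5 release notes for how the MPI-supported estimation is configured); the analyse function is a stand-in for a per-target task:

from mpi4py.futures import MPIPoolExecutor

def analyse(target):
    # placeholder for a per-target analysis task
    return target ** 2

if __name__ == '__main__':
    # rank 0 acts as the master and farms tasks out to MPI worker ranks
    with MPIPoolExecutor() as executor:
        results = list(executor.map(analyse, range(8)))
    print(results)

# launched e.g. with: mpiexec -n 4 python -m mpi4py.futures script.py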

Best, Patricia

kuffmode commented 7 months ago

Awesome, I will do it in early 2024 then. I think the advantage of joblib is that it basically doesn't need anything but the function, so it will be very straightforward for people to use. I'm not sure if it's any better than MPI, but I think it's a good, simple trick.