Bioconductor / BiocParallel

Bioconductor facilities for parallel evaluation
https://bioconductor.org/packages/BiocParallel

strategy of tasks in MulticoreParam #260

Closed z5ouyang closed 4 months ago

z5ouyang commented 4 months ago

This is more of a question for which I cannot find an answer anywhere. Please do share a link if it is answered somewhere. It seems that if tasks is set to the length of X, the parallel evaluation is done in batches? Is the strategy like this: if X is 1:9, with workers=3 and tasks=9, the parallel evaluation starts with 1, 2, 3; once all of 1, 2, 3 have completed, 4, 5, 6 start; and once all of 4, 5, 6 have completed, 7, 8, 9 start.

Am I understanding this correctly? I originally thought (isn't this more efficient?) that it would start with 1, 2, 3; if 2 completed, 4 would start; then if 3 completed, 5 would start; if 4 completed, 6 would start; etc. (This does not seem to be the case.)

Thank you!

mtmorgan commented 4 months ago

We try to explain this on the help page for, e.g., ?MulticoreParam, under the tasks: argument:

          A 'tasks' value of > 0 specifies the exact number of tasks.
          Values can range from 1 (all of 'X' to a single worker) to
          the length of 'X' (each element of 'X' to a different
          worker).

With 3 workers and 7 tasks for a vector of length 7, tasks 1, 2, 3 are sent to workers 1, 2, 3. When any of those, e.g., task 3, finishes, task 4 is sent, then 5, .... This is a 'round robin' and provides an effective way of parallelizing tasks whose completion time is unknown -- workers are always busy until all tasks have been assigned.

library(BiocParallel)
## task i sleeps 3, 2, or 1 seconds, so later elements can finish before earlier ones
f = function(i) { Sys.sleep(3 - (i - 1) %% 3); message(i); i }
x = bplapply(1:7, f, BPPARAM = MulticoreParam(tasks = 7))
## 3
## 6
## 2
## 5
## 1
## 4
## 7
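For contrast, with the default tasks = 0, X is divided as evenly as possible over the workers up front, so each worker evaluates its whole chunk before returning anything. A minimal sketch, reusing f from above:

## default tasks = 0: 1:7 is split into one chunk per worker up front,
## so there is no dynamic hand-out of the remaining elements
y = bplapply(1:7, f, BPPARAM = MulticoreParam(workers = 3, tasks = 0))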
z5ouyang commented 4 months ago

Thanks for the reply! Your description is what I expected, but when I check the CPU usage it is not like that. I am reading a few large files. It starts out using the number of cores I set, then gradually drops to 1 core, and after a long while all of the cores become busy again. I also printed out the start time of each task, and it confirms my suspicion:

        Reading tmp_001.h5.rds  2/15 @2024-04-28 01:18:07.025076
        Reading tmp_000.h5.rds  1/15 @2024-04-28 01:18:07.017021
        Reading tmp_002.h5.rds  3/15 @2024-04-28 01:18:07.029634
        Reading tmp_003.h5.rds  4/15 @2024-04-28 01:18:07.033154
        Reading tmp_004.h5.rds  5/15 @2024-04-28 01:18:07.038072
        Reading tmp_005.h5.rds  6/15 @2024-04-28 01:18:07.042327
        Reading tmp_007.h5.rds  8/15 @2024-04-28 01:18:07.054688
        Reading tmp_008.h5.rds  9/15 @2024-04-28 01:18:07.062468
        Reading tmp_009.h5.rds  10/15 @2024-04-28 01:18:07.069899
        Reading tmp_010.h5.rds  11/15 @2024-04-28 01:19:42.757794
        Reading tmp_006.h5.rds  7/15 @2024-04-28 01:18:07.048538
        Reading tmp_014.h5.rds  15/15 @2024-04-28 01:22:47.852366
        Reading tmp_011.h5.rds  12/15 @2024-04-28 01:22:44.139969
        Reading tmp_012.h5.rds  13/15 @2024-04-28 01:22:45.37573
        Reading tmp_013.h5.rds  14/15 @2024-04-28 01:22:46.614078

(I set up 10 cores.) You can see the first 10 tasks (1~10/15) started at the same time, then one (task 2, I think) finished, then 11 started. Then core usage dropped to 1. And all of a sudden 12, 13, 14, 15 started at the same time.
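A minimal sketch of a timestamped reader that produces output in this format (the file names and the readRDS() call here are just placeholders, not the actual reading code):

library(BiocParallel)
files = sprintf("tmp_%03d.h5.rds", 0:14)
read_one = function(i, files) {
    ## report which file is starting, and when, before the (slow) read
    message(sprintf("Reading %s  %d/%d @%s", files[i], i, length(files),
                    format(Sys.time(), "%Y-%m-%d %H:%M:%OS6")))
    readRDS(files[i])
}
res = bplapply(seq_along(files), read_one, files = files,
               BPPARAM = MulticoreParam(workers = 10))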

Please let me know if you need more information. Thanks again for checking!

mtmorgan commented 4 months ago

I'm guessing that each worker returns a large amount of data.

BiocParallel checks how many workers have finished, then retrieves all of those results before starting the next tasks. When task 2 finishes, the manager retrieves that result (which takes a long time), starts 11, then checks and sees that 4 more workers have finished. Those results are retrieved, which takes a very long time. During this period all workers, including 11, finish, and the only process doing any work is the manager, retrieving the 4 results. Once these are retrieved, four more tasks are started, and the manager sees that 6 workers have finished...
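A minimal sketch that makes this retrieval cost visible (the sleep and object sizes are arbitrary, just to contrast large and small return values):

library(BiocParallel)
big   = function(i) { Sys.sleep(1); numeric(1e7) }   # each result is ~80 MB
small = function(i) { Sys.sleep(1); i }              # each result is a few bytes
p = MulticoreParam(workers = 4, tasks = 4)
system.time(bplapply(1:4, big, BPPARAM = p))    # elapsed time includes returning ~320 MB
system.time(bplapply(1:4, small, BPPARAM = p))  # close to the 1 second of 'work' plus overhead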

Data transfer from worker to manager is the bottleneck in terms of performance. Perhaps parallel evaluation does not help in this context, or perhaps you can revise your tasks so that each does more processing and a smaller amount of data is returned to the manager.
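For instance, a minimal sketch along the lines of the second suggestion, where each worker does the processing and returns only a small summary (readRDS() and the summary contents are placeholders for whatever is actually needed downstream):

library(BiocParallel)
files = sprintf("tmp_%03d.h5.rds", 0:14)       # hypothetical names taken from the log above
summarize_one = function(f) {
    x = readRDS(f)                             # the large object stays on the worker
    list(file = f, class = class(x), size = object.size(x))   # only this is sent back
}
res = bplapply(files, summarize_one, BPPARAM = MulticoreParam(workers = 10))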

z5ouyang commented 4 months ago

Thank you so much for the explanation!