ropensci / drake

An R-focused pipeline toolkit for reproducibility and high-performance computing
https://docs.ropensci.org/drake
GNU General Public License v3.0

Remove all non-clustermq parallel backends? #561

Closed wlandau closed 5 years ago

wlandau commented 6 years ago

Background

Currently, drake offers a multitude of parallel computing backends.

parallelism_choices()
##  [1] "clustermq"            "clustermq_staged"     "future"              
##  [4] "future_lapply"        "future_lapply_staged" "hasty"               
##  [7] "Makefile"             "mclapply"             "mclapply_staged"     
## [10] "parLapply"            "parLapply_staged"

Over the last two years, the number of backends grew and grew because I had so much to learn about high-performance computing. And I still do. But now, users have unnecessarily complicated choices to make, and it is difficult to develop and maintain so many backends.

The more I learn, the more "clustermq" seems like the best of all worlds for drake.

  1. The clustermq package can deploy locally with the "multicore" option and remotely to most common schedulers (see the sketch after this list).
  2. Overhead is very low, even comparable to drake's non-staged multicore backends. Thanks to clustermq, initialization, interprocess communication, and load balancing appear very fast.
  3. We may not need future after all. Yes, the future ecosystem is amazingly powerful and flexible, and https://github.com/HenrikBengtsson/future/issues/204 could potentially even provide staged clustermq-based parallelism. However, I do wonder about a couple things.
    1. How much value do future's non-clustermq backends still provide here? Is there still a need for batchtools-based HPC?
    2. For directed acyclic graphs (DAGs) of non-embarrassingly-parallel jobs, it is important to have full access to a pool of automatically load-balanced persistent clustermq workers so drake can submit new jobs as soon as their dependencies finish and a worker becomes available. (Relevant: https://github.com/mschubert/clustermq/issues/86#issuecomment-401358553 and https://github.com/mschubert/clustermq/issues/86#issuecomment-401461495.) Does future allow this?
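
For concreteness, here is a minimal sketch (in the spirit of point 1 above) of what a clustermq-backed make() could look like on a SLURM cluster. The plan, the template file name, and the number of jobs are placeholders, and the exact arguments may vary across drake/clustermq versions.

library(drake)
options(
  clustermq.scheduler = "slurm",               # or "multicore" on a single machine
  clustermq.template = "slurm_clustermq.tmpl"  # placeholder template file
)
plan <- drake_plan(
  dataset = simulate_data(),    # placeholder functions
  model = fit_model(dataset)
)
make(plan, parallelism = "clustermq", jobs = 2)  # 2 persistent workers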

Proposal

For drake version 7.0.0 – which I hope to release in the first half of 2019 – let's think about removing all parallelism choices except "clustermq". (And let's keep "hasty" mode too, which is just an oversimplified clone of "clustermq". It's not much code, and it's a great sandbox for benchmarking.)

Benefits

Removing superfluous parallel backends will simplify the user experience. Users will no longer be overwhelmed by all the parallel computing choices and having to figure out which one is right for them. In addition, the code base and test suite will be considerably smaller, simpler, leaner, cleaner, faster, easier to maintain, more reliable, and more attractive to potential collaborators.

Your help

My goals for late 2018 are to

  1. Assess the feasibility of this change.
  2. If the change is a good idea, ensure the prerequisites for development are in place.

I would sincerely appreciate any input, advice, help, and participation you are willing to lend.

How will this affect you as a user?

Do you rely on other backends? Having problems with make(parallelism = "clustermq")? Let's talk. I will personally help you transition.

What am I missing?

Are there use cases that are inherently more difficult for clustermq than the other backends? The existing backends have different strengths and weaknesses, and I want to leave time for discussion before assuming clustermq is a one-size-fits-all solution.

Related issues

From a development perspective, the chief obstacles seem to be

Here is a more exhaustive list of relevant issues.

cc

These are some of the folks involved in earlier discussions about drake's high-performance computing. Please feel free to add more. I probably forgot many.

mschubert commented 6 years ago

Rather than using the current clustermq backend, does it make sense to come up with a solution for efficient distribution/caching of data shared between multiple calls and move drake in that direction? (I actually thought about contacting you for a possible R Consortium application, but had too much other work to fit it in this round)

Happy to discuss this in more detail when I'm back from vacation/traveling in 3 weeks.

wlandau commented 6 years ago

I would love to discuss the details when you are back from vacation. I will actually be on a vacation myself from November 15 through 26, so maybe late November or early December would be a good time to follow up.

My own preference at the moment is to pursue both options simultaneously. I know, this is the sort of thinking that gave drake too many parallel backends, but I see value in both, and I am not sure they are mutually exclusive. What you describe seems like a higher-risk, higher-reward effort that could replace clustermq in drake further down the road. In the nearer future, it is trivially easy on the implementation side to get rid of superfluous non-clustermq backends, and I believe this would solve immediate problems in development, testing, collaboration, and general ease of use.

Also, #384, #498, and this article could be relevant to your idea to the extent that storage is related to caching.

idavydov commented 6 years ago

I started using drake with batchtools mainly because zeromq is an extra dependency. It turned out that we have it installed on the cluster. So eventually I switched to clustermq, and it seems to work great.

In my opinion, the only problem with supporting clustermq alone is that some clusters might not have zeromq installed. On top of that, even if you manage to compile zeromq from source, if you have an organization-wide installation of RStudio, it might be difficult to get all the environment variables set up correctly to use a locally installed version.

Another consideration is the following: persistent workers are great, but on large clusters it is often considered good practice to submit shorter jobs rather than longer ones. So in the case of large drake plans and clusters under heavy load, it is probably good to keep the option of transient workers, maybe even exploiting the cluster's dependency management.

Perhaps, the simplest way to do this is through Makefiles?

bart1 commented 6 years ago

A quick note from my side, I just gave it a try. I needed to install zeromq from source which was surprisingly easy due to having no dependency issues. Afterwards I got the example running without any trouble (on slurm).

Not knowing much about how clustermq works, I am not sure how it deals with longer jobs. Since jobs on the cluster have a limited run time, and clustermq seems to reuse existing instances, I would sometimes want to limit this behaviour and start new jobs instead, to avoid jobs being killed by the cluster. I guess the reuse argument to clustermq::workers is meant for that.

Another question I have is whether there is an option to run multiple tasks within one or more larger cluster jobs. Previously I have used a single 64-core job to run many different tasks.

I guess both of these concerns can be addressed by updating the template file to call workers with reuse=FALSE and making multiple parallel calls to workers using something like mclapply(1:64, function(x){clustermq("{master}")}) in the slurm template (untested).

HenrikBengtsson commented 6 years ago

A quick note from my side, I just gave it a try. I needed to install zeromq from source which was surprisingly easy due to having no dependency issues.

I can second this. I've verified that installing ZeroMQ as a non-privileged user from source on a RHEL 6.6(!) cluster went smoothly following the (traditional) installation instructions:

curl -L -O https://github.com/zeromq/libzmq/releases/download/v4.2.5/zeromq-4.2.5.tar.gz
tar xvf zeromq-4.2.5.tar.gz
cd zeromq-4.2.5
./configure --prefix=/path/to/zeromq-4.2.5
make
make install

After this, 'rzmq' and 'clustermq' installed like a charm on a fresh R 3.5.1 setup.

Lots of HPC clusters run these older versions of RHEL/CentOS. I think there are folks out there stuck with RHEL 5 as well. Being able to support those is important. ("You need to update your OS" is not useful feedback for users in such environments.)

wlandau commented 6 years ago

Thanks for chiming in. From your perspective, it sounds like the main issues are (1) persistent workers and (2) compatibility edge cases with ZeroMQ. https://github.com/mschubert/clustermq/issues/101 is solvable, but to Henrik's point, OS portability is key. All this makes me think we might keep the "future" backend as well as "clustermq" and "hasty". Those three backends are fewer than the existing 11, and together they cover a lot.

wlandau commented 6 years ago

@idavydov

Perhaps, the simplest way to do this is through Makefiles?

make(parallelism = "Makefile") was actually drake's first parallel backend. In fact, my original intention was to offload to GNU Make as much as possible. Now, however, I think drake has grown independent of Make and we no longer need the "Makefile" backend.

violetcereza commented 6 years ago

I haven't been using drake much recently but as I said in #126, I used Makefile parallelism because I really liked the ability to kill jobs (from htop) without affecting my main R session. This worked with my workflow, where I rapidly iterate on my scripts. I'm not sure if this is a reasonable request, but I'm wondering if clustermq has a means for easily viewing the status of various jobs and killing them.

wlandau commented 6 years ago

With clustermq and its persistent workers, you unfortunately would not be able to kill individual jobs. But with the "future" backend (which is back on the table) you might, especially if there is some way to expose and broadcast suggestive job names in batchtools template files.

Related: https://github.com/HenrikBengtsson/future/issues/93

krlmlr commented 6 years ago

I love the idea of fewer choices and a "universally good" solution, though I haven't had the chance to try clustermq yet.

wlandau commented 6 years ago

@dapperjapper, in 1ae4528ce9d006d258b8eaa547f13b690cfce5f1, I supplied the target name to the label argument of future(). So for those of us who use computing clusters and future.batchtools, as long as we use job.name in our batchtools template files, we see the names of our targets in job-monitoring utilities like qstat. As for local multisession/multicore parallelism, I do not know if it is possible to post informative job names that get picked up by htop.
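
For reference, here is a minimal sketch of the kind of setup this enables. It assumes a SLURM cluster, future.batchtools, and a batchtools template file (named "slurm_batchtools.tmpl" here, a placeholder) that uses job.name.

library(drake)
library(future.batchtools)
# Toy plan; the target names ("small", "large") become the future labels,
# so they can show up as job names in squeue/qstat via the template's job.name.
my_plan <- drake_plan(
  small = Sys.sleep(5),
  large = Sys.sleep(10)
)
future::plan(batchtools_slurm, template = "slurm_batchtools.tmpl")
make(my_plan, parallelism = "future", jobs = 2)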

wlandau commented 6 years ago

Related: https://github.com/HenrikBengtsson/future.batchtools/issues/8, https://github.com/HenrikBengtsson/future/issues/96.

wlandau commented 6 years ago

@dapperjapper, using https://github.com/HenrikBengtsson/future.callr/commit/a8774795f33204fdf44e957843890ac283a62571 and https://github.com/ropensci/drake/commit/1ae4528ce9d006d258b8eaa547f13b690cfce5f1, you can now see the targets corresponding to processes in htop. Example: https://github.com/HenrikBengtsson/future/issues/96#issuecomment-437708937. We are one step closer to the safe removal of Makefile parallelism.

violetcereza commented 6 years ago

Fantastic work! Thank you!

wlandau commented 5 years ago

Okay, unless someone convinces me otherwise in the meantime, I will plan to downsize to just "clustermq", "future", and "hasty" for version 7.0.0. I will likely start work on this after a couple of conferences in January (one of which is rstudio::conf(2019)).

kendonB commented 5 years ago

I have now tried the latest versions of both future and clustermq for some large embarrassingly parallel jobs (a large spatial intersection operation on two sets of around 500,000 polygons).

future with batchtools or multicore takes far too long to get going to be useful for large sets of quick jobs. future seems to be a fine option if the user puts in the work to match the number of futures to the number of workers (i.e. each call to future contains a target that runs a loop). A great feature is the ability to match resources to targets.

clustermq seems to be working great, except that there is still no way to match resources to targets. One could imagine having teams of different types of workers which would appear once they're needed and disappear when they're no longer needed.

I think that there are masses of wasted resources on clusters today, when people are overallocating resources to perform operations that don't need them. Having no option (other than manually staging) to allocate resources to targets will only result in more wastage on clusters.

wlandau commented 5 years ago

Thanks for giving these backends a try.

future with batchtools or multicore takes far too long to get going to be useful for large sets of quick jobs.

Fortunately, clustermq has a multicore option too: options(clustermq.scheduler = "multicore").
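
In other words, something along these lines should dispatch targets to local cores without touching a scheduler (a minimal sketch; the plan object and the number of jobs are placeholders):

options(clustermq.scheduler = "multicore")
make(plan, parallelism = "clustermq", jobs = 4)  # 4 local persistent workers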

Job groups, https://github.com/ropensci/drake/issues/561#issuecomment-433642782, and/or #304 are possible long-term solutions. drake has always struggled with large numbers of small targets, and I am hoping to solve this problem at some point. First, though, drake needs a smaller/cleaner code base, and that means downsizing the number of parallel backends.

clustermq seems to be working great, except that there is still no way to match resources to targets. One could imagine having teams of different types of workers which would appear once they're needed and disappear when they're no longer needed.

Or one team of persistent heterogeneous workers and appropriate load balancing to accommodate. I think this may be a good clustermq issue. https://github.com/mschubert/clustermq/issues/81 might also be related.

HenrikBengtsson commented 5 years ago

future with batchtools or multicore takes far too long to get going to be useful for large sets of quick jobs.

Admitting I've been out of the loop for a while, but didn't drake have a "future.apply" backend for the purpose of achieving balancing (on top of the core Future API provided by future)?

kendonB commented 5 years ago

Yes. It's the "future_lapply_staged" option, which has worked pretty well for embarrassingly parallel jobs. I may still lobby Will to support it as an option.

mschubert commented 5 years ago

Or one team of persistent heterogeneous workers and appropriate load balancing to accommodate. I think this may be a good clustermq issue.

This would need the whole infrastructure of assigning each call a memory and time cost, constructing dependency graphs, and then distributing tasks to maximize resource usage.

This, to me, sounds more like a job for a workflow package than for a scheduler interface package, so I'm unlikely to support this directly.

But I'm happy to adapt the worker API for e.g. drake to use it that way, or come up with a good glue like https://github.com/ropensci/drake/issues/561#issuecomment-433642782 suggests (I haven't forgotten about this).

Admitting I've been out of the loop for a while, but didn't drake have a "future.apply" backend for the purpose of doing achieving balancing (on top of the core Future API provided by future)?

I also have to read up on how the different future approaches work in detail, and consider putting https://github.com/HenrikBengtsson/future/issues/204 a bit higher up on the priority list (or is anyone interested in taking a stab at this?).

wlandau commented 5 years ago

It may be time for drake to allow custom/external parallel backends. I am thinking about make(parallelism = your_scheduler_function), where your_scheduler_function() takes a drake_config() object and checks/builds all the targets. We could take the current internal run_future_lapply_staged() and put it in its own package, similarly to how future.batchtools and future.callr extend future. We could also make extension easier by standardizing and exposing drake functions for building, checking, and storing targets. With all the changes planned for 7.0.0, I think we are coming up on the right time. Implementation may not start immediately, though. My January will be packed with other things, including two conferences.
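
To illustrate the idea, here is a purely hypothetical sketch. The custom function and the builder it calls are placeholders, not existing drake functions, and a real backend would also need to walk the targets in dependency order and hand them to workers as they free up.

# Hypothetical sketch of the proposed back door (not an actual drake API).
my_scheduler <- function(config) {
  # config would be the object returned by drake_config()
  for (target in config$plan$target) {
    build_one_target(target = target, config = config)  # placeholder for drake internals
  }
}
make(my_plan, parallelism = my_scheduler)  # proposed interface, not yet released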

wlandau commented 5 years ago

https://github.com/ropensci/drake/issues/561#issuecomment-450052249 could also pave the way for #575.

wlandau commented 5 years ago

Re https://github.com/ropensci/drake/issues/561#issuecomment-449975270, point well taken. drake needs to keep track of which targets need which resources, and of course the timing of how the targets are deployed. However, I think drake may need help from clustermq in order to spawn a pool of heterogeneous workers and, for each target, exclude workers that do not meet the resource requirements.

At the interface level, it would be great to combine https://github.com/HenrikBengtsson/future/issues/204 and https://ropenscilabs.github.io/drake-manual/hpc.html#the-resources-column-for-transient-workers. make(parallelism = "future") already recruits the optional resources argument of future.batchtools evaluators. And depending on efficiency, https://github.com/HenrikBengtsson/future/issues/204 might allow drake to fold make(parallelism = "clustermq") right into make(parallelism = "future"). The simplicity would be really nice.
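
As a rough sketch of what that could look like from the user's side (the resource field names and the functions are placeholders; the exact mechanics follow the manual chapter linked above):

library(drake)
plan <- drake_plan(
  small = cheap_step(),     # placeholder functions
  large = expensive_step()
)
# One list of resources per target; the batchtools template decides how
# fields like cores/memory translate into scheduler directives.
plan$resources <- list(
  list(cores = 1, memory = 2048),
  list(cores = 8, memory = 32000)
)
make(plan, parallelism = "future", jobs = 2)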

wlandau commented 5 years ago

Given the way things are going, I think the present issue will be solved with the following. Checked items are implemented in the 561 branch.

wlandau commented 5 years ago

After another look at the code base, I no longer think it is a good idea to officially support custom backends in external packages because it would require exposing too many sensitive internals. That said, I will still open a back door for experimentation: make(parallelism = your_scheduler_function). Caveats:

  1. This is really a sandbox similar to hasty mode, so drake will always throw a warning.
  2. To get at the necessary internals, you will need to use :::.

I think this approach could

  1. Make it easier for others to help with #575.
  2. Help advanced HPC users aggressively optimize scheduling for their computing resources.

wlandau commented 5 years ago

Let's externally offload the unofficial backends through this back door: hasty mode and "future_lapply_staged" parallelism.

wlandau commented 5 years ago

I believe all the immediate issues are fixed. @kendonB, "future_lapply_staged" parallelism is now available through https://github.com/wlandau/drake.future.lapply.staged. Likewise, hasty mode is offloaded to https://github.com/wlandau/drake.hasty. @mschubert, let's follow up separately about https://github.com/ropensci/drake/issues/561#issuecomment-433642782 and https://github.com/HenrikBengtsson/future/issues/204.

HenrikBengtsson commented 5 years ago

At the risk of being "fluffy", here are some quick comments and clarifications related to the future framework and what I think is the gist of this thread:

brendanf commented 5 years ago

A quick note from my side, I just gave it a try. I needed to install zeromq from source which was surprisingly easy due to having no dependency issues.

I can second this. I've verified that installing ZeroMQ as a non-privileged user from source on a RHEL 6.6(!) cluster went smoothly following the (traditional) installation instructions:

curl -L -O https://github.com/zeromq/libzmq/releases/download/v4.2.5/zeromq-4.2.5.tar.gz
tar xvf zeromq-4.2.5.tar.gz
cd zeromq-4.2.5
./configure --prefix=/path/to/zeromq-4.2.5
make
make install

After this, 'rzmq' and 'clustermq' installed like a charm on a fresh R 3.5.1 setup.

Lots of HPC clusters run these older versions of RHEL/CentOS. I think there are folks out there stuck with RHEL 5 as well. Being able to support those is important. ("You need to update your OS" is not useful feedback for users in such environments.)

@HenrikBengtsson I managed to install zeromq on my university cluster (running CentOS 7.6.1810) using the commands you listed, but I'm having trouble getting rzmq to load properly. Can you spell out in more detail where the zeromq libraries should be installed and/or which environment variables need to be set for this to work?