In computationally demanding analysis projects, statisticians and data scientists asynchronously deploy long-running tasks to distributed systems, ranging from traditional clusters to cloud services. The `crew.cluster` package extends the `mirai`-powered `crew` package with worker launcher plugins for traditional high-performance computing systems. Inspiration also comes from the `mirai`, `future`, `rrq`, `clustermq`, and `batchtools` packages.
## Installation

| Type | Source | Command |
|---|---|---|
| Release | CRAN | `install.packages("crew.cluster")` |
| Development | GitHub | `remotes::install_github("wlandau/crew.cluster")` |
| Development | R-universe | `install.packages("crew.cluster", repos = "https://wlandau.r-universe.dev")` |
Please see https://wlandau.github.io/crew.cluster/ for documentation, including a full function reference and usage tutorial.
## Usage

First, create a controller object appropriate for your platform. For example, to launch workers on a Sun Grid Engine (SGE) cluster, use `crew_controller_sge()`.
```r
library(crew.cluster)
controller <- crew_controller_sge(
  name = "my_workflow", # for informative job names
  workers = 16,
  tasks_max = 2, # to avoid reaching wall time limits
  seconds_idle = 10, # to release resources when they are not needed
  # Try 16 GB memory first, then use 32 GB to retry if the worker crashes,
  # then 64 GB for all subsequent retries after failure. Go back to 16 GB
  # if the worker completes all its tasks before exiting.
  sge_memory_gigabytes_required = c(16, 32, 64),
  script_lines = "module load R" # if R is an environment module
)
controller$start()
```
At this point, usage is exactly the same as basic `crew`. The `push()` method submits tasks and auto-scales SGE workers to meet demand.
```r
controller$push(name = "do work", command = do_work())
```
The `pop()` method retrieves available tasks.
```r
controller$pop()
#> # A tibble: 1 × 11
#>   name      command  result  seconds   seed error trace warni…¹ launc…² worker insta…³
#>   <chr>     <chr>    <list>    <dbl>  <int> <chr> <chr> <chr>   <chr>    <int> <chr>
#> 1 do work … do_work… <int>         0 1.56e8 NA    NA    NA      79e71c…      1 7686b2…
#> # … with abbreviated variable names ¹warnings, ²launcher, ³instance
```
Remember to terminate the controller when you are done.

```r
controller$terminate()
```
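The pieces above fit together into a complete session. A minimal sketch, assuming access to an SGE cluster and that the controller's `wait()` method behaves as in the `crew` package (check the `crew` reference for exact method signatures):

```r
library(crew.cluster)

# Hypothetical controller for illustration; see the full example above
# for realistic arguments.
controller <- crew_controller_sge(name = "sketch", workers = 2)
controller$start()

# Push several tasks, then block until all of them are done.
for (i in seq_len(4)) {
  controller$push(name = paste0("task_", i), command = Sys.getpid())
}
controller$wait(mode = "all")

# Retrieve each completed task as a one-row tibble.
while (!is.null(task <- controller$pop())) {
  print(task$result)
}

controller$terminate()
```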
## Monitoring

To manage resource usage, you may choose to list and manually terminate cluster jobs using `crew_monitor_sge()` and other supported monitors. Example for SGE:
```r
monitor <- crew_monitor_sge()
job_list <- monitor$jobs()
job_list
#> # A tibble: 2 × 9
#>   job_number prio    name    owner state start_time queue_name jclass_name slots
#>   <chr>      <chr>   <chr>   <chr> <chr> <chr>      <chr>      <lgl>       <chr>
#> 1 131853812  0.05000 crew-m… USER… r     2024-01-0… all.norma… NA          1
#> 2 131853813  0.05000 crew-m… USER… r     2024-01-0… all.norma… NA          1
monitor$terminate(jobs = job_list$job_number)
#> USER has registered the job 131853812 for deletion
#> USER has registered the job 131853813 for deletion
monitor$jobs()
#> data frame with 0 columns and 0 rows
```
`monitor$terminate(all = TRUE)` terminates all your SGE jobs, regardless of whether `crew.cluster` created them.
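To avoid touching unrelated jobs, you can filter the job list before terminating. A sketch, assuming job names keep the `crew-` prefix visible in the listing above (adjust the pattern to your site):

```r
# Keep only jobs whose names match the "crew-" prefix shown in the
# monitor output above (an assumption; verify against your own listing).
crew_jobs <- subset(job_list, grepl("^crew-", name))
monitor$terminate(jobs = crew_jobs$job_number)
```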
`crew.cluster` submits jobs over the local network using system calls to the resource manager (e.g. SGE or SLURM). Please invoke `crew.cluster` on a node of the cluster, either a login node (head node) or a compute node. Many clusters require `module load R` (or `module load R/x.y.z` for a specific version) in order to use R on the cluster. In `crew.cluster`, you will most likely need to supply `"module load R"` or similar to the `script_lines` argument of e.g. `crew_controller_sge()`
.

## Risks

The risks of `crew.cluster` are the same as those of `crew`, plus the risks of traditional high-performance computing environments. These distributed systems typically operate inside a firewall and trust the local network. It is your responsibility to assess the security of these systems and use `crew.cluster` in a safe manner. In addition, `crew.cluster` automatically launches jobs on the cluster scheduler, and it may not always be able to terminate leftover jobs. It is your responsibility to monitor your jobs and manually terminate any that `crew.cluster` cannot.
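SGE is only one of the supported schedulers. As a hypothetical sketch of the same module-loading tip on a SLURM cluster (the argument names below are assumptions; verify them against the `crew.cluster` function reference):

```r
library(crew.cluster)

# Hypothetical SLURM analogue of the SGE example above. Argument names
# are assumptions; check the crew.cluster reference before use.
controller <- crew_controller_slurm(
  name = "my_workflow",
  workers = 16,
  seconds_idle = 10,
  script_lines = "module load R" # if R is an environment module
)
controller$start()
```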
## Thanks

The maintainer of `mirai` and `nanonext` graciously accommodated the complicated and demanding feature requests that made `crew` and its ecosystem possible. Template scripts from `clustermq`, released under the permissive Apache License 2.0, helped construct launcher plugins to clusters where direct access was not possible. See the `LICENSE.note` file in this package.

## Code of conduct

Please note that the `crew` project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.
## Citation

```r
citation("crew.cluster")
```