-
I am trying to use mlr with batchtools to conduct benchmarking. To generate predictions from the benchmark run with batchtools, I need to retrieve the benchmark results. Using reduceResu…
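One way to do this is mlr's batchtools integration, which maps a benchmark onto a batchtools experiment registry and reduces the finished jobs back into a regular `BenchmarkResult`. A minimal sketch, assuming `batchmark()`/`reduceBatchmarkResults()` as documented in mlr; the registry path, learners, and task are illustrative:

```r
# Hedged sketch: benchmark via batchtools, then reduce to a BenchmarkResult.
# Registry directory, learners, and task are illustrative assumptions.
library(mlr)
library(batchtools)

reg <- makeExperimentRegistry("bench_registry", seed = 1)
batchmark(
  learners    = list(makeLearner("classif.rpart"), makeLearner("classif.lda")),
  tasks       = iris.task,
  resamplings = makeResampleDesc("CV", iters = 3),
  reg         = reg
)
submitJobs(reg = reg)
waitForJobs(reg = reg)

# Collect the finished jobs into an ordinary mlr BenchmarkResult,
# from which predictions can be extracted as usual.
bmr <- reduceBatchmarkResults(reg = reg)
preds <- getBMRPredictions(bmr)
```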
-
Hi Henrik,
maybe that's utterly basic and the wrong place to ask, but I thought it might be of interest to more people:
Is there a way to retrieve a future's expression in the following set…
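For what it's worth, a `Future` object does carry its unevaluated expression internally. A minimal sketch, assuming the (internal, not formally exported) `expr` field of a `Future` object:

```r
# Hedged sketch: inspecting the expression stored in a Future object.
# The `expr` field is internal and not part of the documented API.
library(future)
plan(sequential)

f <- future({ Sys.getpid() + 0L }, lazy = TRUE)
f$expr     # the quoted expression, before evaluation
value(f)   # triggers evaluation and returns the result
```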
-
Suppose one runs hyperparameter optimization with mlrMBO on a cluster and, due to resource limits (a memory or time limit, for example), the scheduling system has to kill the process. In this case, is…
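mlrMBO can checkpoint its optimization state to disk so a killed run can be resumed. A minimal sketch, assuming the `save.on.disk.at`/`save.file.path` control options and `mboContinue()` as documented in mlrMBO; the objective function and iteration counts are illustrative:

```r
# Hedged sketch: periodic state saving with mlrMBO so a killed run
# can be resumed; objective and iteration budget are illustrative.
library(mlrMBO)
library(smoof)

obj <- makeSingleObjectiveFunction(
  fn      = function(x) sum(x^2),
  par.set = makeNumericParamSet(len = 2, lower = -5, upper = 5)
)

ctrl <- makeMBOControl(
  save.on.disk.at = 0:20,                  # write state every iteration
  save.file.path  = "mbo_state.RData"
)
ctrl <- setMBOControlTermination(ctrl, iters = 20)

res <- mbo(obj, control = ctrl)

# After the scheduler kills the process, restart from the saved state:
# res <- mboContinue("mbo_state.RData")
```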
-
Hello,
When I call scancel to cancel all tasks submitted to a Slurm cluster by future.batchtools and then hit CTRL-C to get the terminal back, R displays an error per task and is painfully slow to come b…
-
Docker image for testing: https://hub.docker.com/r/agaveapi/htcondor/
mllg updated 7 months ago
-
For schedulers like SLURM, [preemptible jobs](https://slurm.schedmd.com/preempt.html) are really nice. Many university computing resources have a lot of spare computing power for preemptible jobs and…
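With future.batchtools, scheduler-specific settings like a preemptible partition can be passed through the `resources` argument of `plan()`. A minimal sketch; the partition name, template file, and resource fields are site-specific assumptions:

```r
# Hedged sketch: routing futures to a preemptible SLURM partition.
# "preempt", the template path, and the resource fields are assumptions
# that must match your site's slurm.tmpl.
library(future.batchtools)

plan(batchtools_slurm,
     template  = "slurm.tmpl",
     resources = list(
       partition = "preempt",   # hypothetical preemptible queue
       walltime  = 3600,
       memory    = "4G"
     ))

f <- future(Sys.info()[["nodename"]])
value(f)
```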
-
Hello,
I am currently running parallel jobs (with bplapply) on an LSF cluster using BatchToolsParam, and I found an issue where no logs are produced in runs in which a few jobs fail.
…
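For reference, per-job logging can be switched on when constructing the param object. A minimal sketch, assuming the `log`/`logdir` arguments common to BiocParallel param classes; the worker count and template path are illustrative:

```r
# Hedged sketch: BatchToolsParam on LSF with per-job logs kept on disk.
# Worker count, template path, and log directory are assumptions.
library(BiocParallel)

param <- BatchToolsParam(
  workers       = 10,
  cluster       = "lsf",
  template      = "lsf.tmpl",
  log           = TRUE,
  logdir        = "bp_logs",
  stop.on.error = FALSE      # keep going when a few jobs fail
)

# bptry() returns partial results instead of aborting on the failures,
# so the logs of the failing jobs can be inspected afterwards.
res <- bptry(bplapply(1:10, function(i) {
  if (i == 3) stop("simulated failure")
  sqrt(i)
}, BPPARAM = param))
```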
-
# Background
Being able to *relaunch* a future, that is, to re-evaluate a future expression that has already been evaluated in full or only in part due to a failure, is useful when, for instance, the communic…
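Until such a feature exists, relaunching can be approximated at the user level by keeping the quoted expression and re-creating the future on failure. A minimal sketch; `with_retry` is a hypothetical helper, not part of the future API:

```r
# Hedged sketch: a user-level retry wrapper that re-creates a future
# from its expression when evaluation fails. `with_retry` is a
# hypothetical helper, not a built-in relaunch mechanism.
library(future)
plan(multisession)

with_retry <- function(expr, retries = 3) {
  expr <- substitute(expr)
  for (attempt in seq_len(retries)) {
    f <- future(expr, substitute = FALSE)
    result <- tryCatch(value(f), error = identity)
    if (!inherits(result, "error")) return(result)
    message("attempt ", attempt, " failed: ", conditionMessage(result))
  }
  stop("all ", retries, " attempts failed")
}

with_retry(sqrt(16))   # evaluated on a worker; retried on failure
```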
-
+ Bridging python packages (`dask-geopandas`, `geopolars`, `cuSpatial`) to R workflow
+ Optimal splitting of computational regions with respect to shape complexity and input data resolution
+ Streamlined…
-
Is there an analogous setting in future::plan for Rmpi?
```r
library(parallel)
hello_world
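# Hedged sketch of an analogous future setup: an MPI-backed cluster
# created via parallel (which uses Rmpi under the hood) and handed to
# plan(cluster). Assumes Rmpi and an MPI runtime are installed; the
# worker count is illustrative.
library(future)
cl <- parallel::makeCluster(4, type = "MPI")
plan(cluster, workers = cl)
f <- future(Sys.getpid())
value(f)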