mlr-org / mlr

Machine Learning in R
https://mlr.mlr-org.com

Diagnostic tools to benchmark results with OpenMP and without OpenMP? #1911

Closed heoa closed 7 years ago

heoa commented 7 years ago

I would like to test OpenMP with mlr. For example, xgboost supports OpenMP if it is enabled at compile time [1]. I would like to benchmark results with and without OpenMP.

Is there a way to benchmark mlr results with OpenMP versus without it?

[1] https://github.com/dmlc/xgboost/blob/master/R-package/R/xgb.train.R

larskotthoff commented 7 years ago

What exactly do you want to benchmark?

heoa commented 7 years ago

@larskotthoff easiest would be computation time with different numbers of cores allocated.

larskotthoff commented 7 years ago

Time is available as a normal performance measure, see the tutorial.
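As a sketch of what that looks like: mlr ships a `timetrain` measure that records training time alongside ordinary error measures (the use of `sonar.task` and `classif.xgboost` here is illustrative and assumes the xgboost package is installed).

```r
library(mlr)

# Measure training time as a regular performance measure during
# resampling. sonar.task ships with mlr; classif.xgboost assumes
# the xgboost package is installed.
lrn <- makeLearner("classif.xgboost")
res <- resample(lrn, sonar.task, cv3,
                measures = list(timetrain, mmce))
res$aggr  # aggregated training time and misclassification error
```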

heoa commented 7 years ago

@larskotthoff I see no mention of OpenMP there. There seems to be no way to explicitly turn it on or off.

larskotthoff commented 7 years ago

Yeah, if you want OpenMP support you need to compile the respective package with it (in this case xgboost). You may be able to pass the number of cores to use, but that's something the learner package needs to support.

berndbischl commented 7 years ago

@heoa please state exactly at what level you want to use OpenMP.

berndbischl commented 7 years ago

Do you want to parallelize the fitting of the model itself (as xgboost does)? That is not (directly) supported in mlr at all; you need to enable it through a parameter of the model.

mlr allows parallelization of resampling, benchmarking, tuning, etc., and this is done through parallelMap (which in turn wraps the parallel and batchtools packages).
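A minimal sketch of the mlr-level parallelization mentioned above, using parallelMap to run resampling folds in parallel (the choice of `classif.rpart` and a 2-worker socket cluster is illustrative):

```r
library(mlr)
library(parallelMap)

# Parallelize at the resampling level: folds are distributed across
# worker processes. A socket cluster works on all platforms;
# parallelStartMulticore() is a fork-based alternative on Unix.
parallelStartSocket(2)
res <- resample(makeLearner("classif.rpart"), sonar.task, cv10)
parallelStop()
```

Note this parallelizes mlr's own loop over folds; it is independent of any OpenMP threading inside the learner's native code.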

giuseppec commented 7 years ago

Why don't you just use the parameter nthread = 1, e.g. makeLearner("classif.xgboost", nthread = 1)? Shouldn't this already give you the possibility to "deactivate" OpenMP? The xgboost documentation states:

#' Parallelization is automatically enabled if \code{OpenMP} is present. 
#' Number of threads can also be manually specified via \code{nthread} parameter.
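Building on that, a hedged sketch of the benchmark the original question asks for: compare training time with one thread versus all available cores via `nthread` (the learner ids and use of `sonar.task` are illustrative; whether more threads actually help depends on xgboost being compiled with OpenMP).

```r
library(mlr)

# Compare wall-clock training time with nthread = 1 vs. all cores.
# nthread is passed through to xgboost, which uses OpenMP threads
# internally when compiled with OpenMP support.
lrn1 <- makeLearner("classif.xgboost", id = "xgb.1thread", nthread = 1)
lrnN <- makeLearner("classif.xgboost", id = "xgb.allcores",
                    nthread = parallel::detectCores())
bmr <- benchmark(list(lrn1, lrnN), sonar.task, cv3,
                 measures = timetrain)
getBMRAggrPerformances(bmr, as.df = TRUE)
```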

Reopen if this does not answer your question.