mlr-org / mlrMBO

Toolbox for Bayesian Optimization and Model-Based Optimization in R
https://mlrmbo.mlr-org.com

proposePointsMOIMBO: Error in crossover(t(X[parents, , drop = FALSE])) : Argument 's_parents' is not a real matrix. #392

abossenbroek closed this issue 4 years ago

abossenbroek commented 7 years ago

I am trying to run the following optimization:

require(mlrMBO)
lrn_test = makeFilterWrapper(learner = "regr.xgboost")
param_test = makeParamSet(
  makeDiscreteParam("fw.perc", values = 0.6, tunable = FALSE),
  makeDiscreteParam("fw.method", values = "information.gain", tunable = FALSE),
  makeDiscreteParam("nrounds", values = 100.0, tunable = FALSE),
  makeNumericParam("min_child_weight", lower = 1L, upper = 10L),
  makeNumericParam("max_depth", lower = 2, upper = 5, default = 2, trafo = function(x) round(2 * x)),
  makeNumericParam("subsample", lower = 0.5, upper = 1),
  makeNumericParam("colsample_bytree", lower = 0.4, upper = 1)
)

bh.task_no_factor = createDummyFeatures(bh.task)

mbo.ctrl = makeMBOControl(propose.points = 10,
                          impute.y.fun = function(x, y, opt.path, ...)
                            runif(1, min = 1, max = 3) * 1e2,
                          final.method = "best.predicted",
                          final.evals = 32)
mbo.ctrl = setMBOControlInfill(control = mbo.ctrl,
                               crit = crit.cb)
mbo.ctrl = setMBOControlMultiPoint(control = mbo.ctrl, method = "moimbo")
mbo.ctrl = setMBOControlTermination(mbo.ctrl, iters = 10)

surrogate.lrn = makeLearner("regr.randomForest", predict.type = "se")
surrogate.lrn = makeImputeWrapper(surrogate.lrn,
                                  classes = list(numeric = imputeConstant(1e3),
                                                 factor = imputeConstant("__miss__")))
ctrl = mlr:::makeTuneControlMBO(learner = surrogate.lrn, mbo.control = mbo.ctrl)

tuneParams(lrn_test, task = bh.task_no_factor, resampling = cv3,
           par.set = param_test,
           measures = mape,
           control = ctrl)

which results in the following output:

[Tune] Started tuning learner regr.xgboost.filtered for parameter set:
                     Type len Def           Constr Req Tunable Trafo
fw.perc          discrete   -   -              0.6   -   FALSE     -
fw.method        discrete   -   - information.gain   -   FALSE     -
nrounds          discrete   -   -              100   -   FALSE     -
min_child_weight  numeric   -   -          1 to 10   -    TRUE     -
max_depth         numeric   -   2           2 to 5   -    TRUE     Y
subsample         numeric   -   -         0.5 to 1   -    TRUE     -
colsample_bytree  numeric   -   -         0.4 to 1   -    TRUE     -
With control class: TuneControlMBO
Imputation value: Inf
[Tune-x] 1: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=9.32; max_depth=8; subsample=0.86; colsample_bytree=0.705
[Tune-y] 1: mape.test.mean=0.118; time: 0.0 min
[Tune-x] 2: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=4.41; max_depth=5; subsample=0.685; colsample_bytree=0.573
[Tune-y] 2: mape.test.mean=0.129; time: 0.0 min
[Tune-x] 3: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=5.76; max_depth=6; subsample=0.744; colsample_bytree=0.605
[Tune-y] 3: mape.test.mean=0.124; time: 0.0 min
[Tune-x] 4: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=5.84; max_depth=7; subsample=0.644; colsample_bytree=0.772
[Tune-y] 4: mape.test.mean=0.128; time: 0.0 min
[Tune-x] 5: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=5.25; max_depth=7; subsample=0.805; colsample_bytree=0.983
[Tune-y] 5: mape.test.mean=0.124; time: 0.0 min
[Tune-x] 6: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=3.1; max_depth=8; subsample=0.773; colsample_bytree=0.747
[Tune-y] 6: mape.test.mean=0.132; time: 0.0 min
[Tune-x] 7: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=1.46; max_depth=8; subsample=0.846; colsample_bytree=0.675
[Tune-y] 7: mape.test.mean=0.123; time: 0.1 min
[Tune-x] 8: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=6.89; max_depth=4; subsample=0.717; colsample_bytree=0.65
[Tune-y] 8: mape.test.mean=0.122; time: 0.0 min
[Tune-x] 9: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=8.77; max_depth=8; subsample=0.758; colsample_bytree=0.692
[Tune-y] 9: mape.test.mean=0.128; time: 0.0 min
[Tune-x] 10: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=6.66; max_depth=9; subsample=0.617; colsample_bytree=0.891
[Tune-y] 10: mape.test.mean=0.126; time: 0.1 min
[Tune-x] 11: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=3.29; max_depth=5; subsample=0.678; colsample_bytree=0.548
[Tune-y] 11: mape.test.mean=0.13; time: 0.0 min
[Tune-x] 12: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=2.83; max_depth=9; subsample=0.884; colsample_bytree=0.962
[Tune-y] 12: mape.test.mean=0.118; time: 0.1 min
[Tune-x] 13: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=7.99; max_depth=8; subsample=0.991; colsample_bytree=0.497
[Tune-y] 13: mape.test.mean=0.131; time: 0.0 min
[Tune-x] 14: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=8.43; max_depth=6; subsample=0.591; colsample_bytree=0.837
[Tune-y] 14: mape.test.mean=0.136; time: 0.0 min
[Tune-x] 15: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=8.17; max_depth=6; subsample=0.923; colsample_bytree=0.742
[Tune-y] 15: mape.test.mean=0.128; time: 0.0 min
[Tune-x] 16: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=2.24; max_depth=7; subsample=0.79; colsample_bytree=0.826
[Tune-y] 16: mape.test.mean=0.122; time: 0.0 min
[Tune-x] 17: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=9.41; max_depth=5; subsample=0.709; colsample_bytree=0.932
[Tune-y] 17: mape.test.mean=0.131; time: 0.0 min
[Tune-x] 18: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=4.18; max_depth=6; subsample=0.56; colsample_bytree=0.566
[Tune-y] 18: mape.test.mean=0.132; time: 0.0 min
[Tune-x] 19: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=1.25; max_depth=5; subsample=0.976; colsample_bytree=0.799
[Tune-y] 19: mape.test.mean=0.115; time: 0.0 min
[Tune-x] 20: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=6.17; max_depth=9; subsample=0.904; colsample_bytree=0.863
[Tune-y] 20: mape.test.mean=0.126; time: 0.1 min
[Tune-x] 21: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=9.74; max_depth=10; subsample=0.96; colsample_bytree=0.522
[Tune-y] 21: mape.test.mean=0.136; time: 0.1 min
[Tune-x] 22: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=4.78; max_depth=10; subsample=0.546; colsample_bytree=0.412
[Tune-y] 22: mape.test.mean=0.14; time: 0.1 min
[Tune-x] 23: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=2.38; max_depth=7; subsample=0.632; colsample_bytree=0.463
[Tune-y] 23: mape.test.mean=0.136; time: 0.0 min
[Tune-x] 24: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=3.89; max_depth=5; subsample=0.505; colsample_bytree=0.957
[Tune-y] 24: mape.test.mean=0.132; time: 0.0 min
[Tune-x] 25: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=1.88; max_depth=9; subsample=0.527; colsample_bytree=0.429
[Tune-y] 25: mape.test.mean=0.141; time: 0.1 min
[Tune-x] 26: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=7.55; max_depth=7; subsample=0.943; colsample_bytree=0.617
[Tune-y] 26: mape.test.mean=0.129; time: 0.0 min
[Tune-x] 27: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=4.86; max_depth=4; subsample=0.825; colsample_bytree=0.467
[Tune-y] 27: mape.test.mean=0.128; time: 0.0 min
[Tune-x] 28: fw.perc=0.6; fw.method=information.gain; nrounds=100; min_child_weight=7.43; max_depth=10; subsample=0.576; colsample_bytree=0.911
[Tune-y] 28: mape.test.mean=0.128; time: 0.1 min
Error in crossover(t(X[parents, , drop = FALSE])) : 
  Argument 's_parents' is not a real matrix.
In addition: Warning message:
In dist(X) : NAs introduced by coercion

The run errors out instead of producing the desired outcome. Any suggestions on how I could resolve this?

My R session info:

R version 3.4.0 Patched (2017-05-15 r72680)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United States.1252    LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C                           LC_TIME=English_United States.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] emoa_0.5-0        mlrMBO_1.0.0      smoof_1.4         checkmate_1.8.2   BBmisc_1.11       mlr_2.11          ParamHelpers_1.10

loaded via a namespace (and not attached):
 [1] RWeka_0.4-33                 purrr_0.2.2                  splines_3.4.0                rJava_0.9-8                 
 [5] lattice_0.20-35              colorspace_1.3-2             htmltools_0.3.5              viridisLite_0.2.0           
 [9] FSelector_0.21               survival_2.41-3              plotly_4.6.0                 DBI_0.6-1                   
[13] xxxx_0.1.0.9000              entropy_1.2.1                xgboost_0.6-4                bit64_0.9-7                 
[17] plot3D_1.1                   RColorBrewer_1.1-2           lhs_0.14                     lambda.r_1.1.9              
[21] mco_1.0-15.1                 plyr_1.8.4                   stringr_1.2.0                munsell_0.4.3               
[25] gtable_0.2.0                 futile.logger_1.4.3          htmlwidgets_0.8              misc3d_0.8-4                
[29] RWekajars_3.9.1-3            parallelMap_1.3              parallel_3.4.0               Rcpp_0.12.10                
[33] scales_0.4.1                 backports_1.0.5              randomForestSRC_2.4.2        jsonlite_1.4                
[37] bit_1.1-12                   ggplot2_2.2.1                digest_0.6.12                stringi_1.1.5               
[41] dplyr_0.5.0                  grid_3.4.0                   tools_3.4.0                  magrittr_1.5                
[45] lazyeval_0.2.0               tibble_1.3.0                 randomForest_4.6-12          futile.options_1.0.0        
[49] tidyr_0.6.1                  Matrix_1.2-10                data.table_1.10.4            assertthat_0.2.0            
[53] httr_1.2.1                   R6_2.2.0                     compiler_3.4.0   
abossenbroek commented 7 years ago

I updated to the latest git version of mlrMBO (release 1.1.1) and the following code still fails with an error:

require(mlrMBO)
lrn_test = makeFilterWrapper(learner = "regr.xgboost")
param_test = makeParamSet(
  makeNumericParam("fw.perc", lower = 0.2, upper = 0.7),
  makeDiscreteParam("fw.method", values = "information.gain", tunable = FALSE),
  makeDiscreteParam("nrounds", values = 100.0, tunable = FALSE),
  makeNumericParam("min_child_weight", lower = 1L, upper = 10L),
  makeNumericParam("max_depth", lower = 2, upper = 5, default = 2, trafo = function(x) round(2 * x)),
  makeNumericParam("subsample", lower = 0.5, upper = 1),
  makeNumericParam("colsample_bytree", lower = 0.4, upper = 1)
)

bh.task_no_factor = createDummyFeatures(bh.task)

mbo.ctrl = makeMBOControl(propose.points = 10,
                          final.evals = 32)
mbo.ctrl = setMBOControlInfill(control = mbo.ctrl,
                               crit = crit.cb)
mbo.ctrl = setMBOControlMultiPoint(control = mbo.ctrl, method = "moimbo", 
                                   moimbo.objective = "ei.dist",
                                   moimbo.dist = "nearest.neighbor",
                                   moimbo.maxit = 10L)
mbo.ctrl = setMBOControlTermination(mbo.ctrl, iters = 10)

ctrl = mlr:::makeTuneControlMBO(mbo.control = mbo.ctrl)

x = tuneParams(lrn_test, task = bh.task_no_factor, resampling = cv3,
               par.set = param_test,
               measures = mape,
               control = ctrl)

which gives the following error:

[Tune] Started tuning learner regr.xgboost.filtered for parameter set:
                     Type len Def           Constr Req Tunable Trafo
fw.perc           numeric   -   -       0.2 to 0.7   -    TRUE     -
fw.method        discrete   -   - information.gain   -   FALSE     -
nrounds          discrete   -   -              100   -   FALSE     -
min_child_weight  numeric   -   -          1 to 10   -    TRUE     -
max_depth         numeric   -   2           2 to 5   -    TRUE     Y
subsample         numeric   -   -         0.5 to 1   -    TRUE     -
colsample_bytree  numeric   -   -         0.4 to 1   -    TRUE     -
With control class: TuneControlMBO
Imputation value: Inf
[Tune-x] 1: fw.perc=0.571; fw.method=information.gain; nrounds=100; min_child_weight=4.49; max_depth=7; subsample=0.649; colsample_bytree=0.695
[Tune-y] 1: mape.test.mean=0.128; time: 0.0 min
[Tune-x] 2: fw.perc=0.646; fw.method=information.gain; nrounds=100; min_child_weight=3.69; max_depth=7; subsample=0.741; colsample_bytree=0.766
[Tune-y] 2: mape.test.mean=0.129; time: 0.0 min
[Tune-x] 3: fw.perc=0.54; fw.method=information.gain; nrounds=100; min_child_weight=4.97; max_depth=7; subsample=0.885; colsample_bytree=0.821
[Tune-y] 3: mape.test.mean=0.122; time: 0.0 min
[Tune-x] 4: fw.perc=0.415; fw.method=information.gain; nrounds=100; min_child_weight=4.01; max_depth=6; subsample=0.908; colsample_bytree=0.485
[Tune-y] 4: mape.test.mean=0.135; time: 0.0 min
[Tune-x] 5: fw.perc=0.312; fw.method=information.gain; nrounds=100; min_child_weight=8.36; max_depth=6; subsample=0.851; colsample_bytree=0.555
[Tune-y] 5: mape.test.mean=0.131; time: 0.0 min
[Tune-x] 6: fw.perc=0.265; fw.method=information.gain; nrounds=100; min_child_weight=9.4; max_depth=8; subsample=0.572; colsample_bytree=0.98
[Tune-y] 6: mape.test.mean=0.134; time: 0.0 min
[Tune-x] 7: fw.perc=0.297; fw.method=information.gain; nrounds=100; min_child_weight=5.18; max_depth=9; subsample=0.916; colsample_bytree=0.532
[Tune-y] 7: mape.test.mean=0.135; time: 0.1 min
[Tune-x] 8: fw.perc=0.49; fw.method=information.gain; nrounds=100; min_child_weight=7.06; max_depth=8; subsample=0.597; colsample_bytree=0.86
[Tune-y] 8: mape.test.mean=0.13; time: 0.1 min
[Tune-x] 9: fw.perc=0.683; fw.method=information.gain; nrounds=100; min_child_weight=6.6; max_depth=5; subsample=0.764; colsample_bytree=0.897
[Tune-y] 9: mape.test.mean=0.124; time: 0.0 min
[Tune-x] 10: fw.perc=0.46; fw.method=information.gain; nrounds=100; min_child_weight=3.21; max_depth=4; subsample=0.933; colsample_bytree=0.574
[Tune-y] 10: mape.test.mean=0.123; time: 0.0 min
[Tune-x] 11: fw.perc=0.281; fw.method=information.gain; nrounds=100; min_child_weight=2.24; max_depth=6; subsample=0.697; colsample_bytree=0.514
[Tune-y] 11: mape.test.mean=0.137; time: 0.0 min
[Tune-x] 12: fw.perc=0.603; fw.method=information.gain; nrounds=100; min_child_weight=9.19; max_depth=8; subsample=0.663; colsample_bytree=0.647
[Tune-y] 12: mape.test.mean=0.132; time: 0.0 min
[Tune-x] 13: fw.perc=0.508; fw.method=information.gain; nrounds=100; min_child_weight=7.8; max_depth=5; subsample=0.637; colsample_bytree=0.492
[Tune-y] 13: mape.test.mean=0.128; time: 0.0 min
[Tune-x] 14: fw.perc=0.352; fw.method=information.gain; nrounds=100; min_child_weight=1.88; max_depth=4; subsample=0.618; colsample_bytree=0.613
[Tune-y] 14: mape.test.mean=0.127; time: 0.0 min
[Tune-x] 15: fw.perc=0.335; fw.method=information.gain; nrounds=100; min_child_weight=1.13; max_depth=9; subsample=0.715; colsample_bytree=0.83
[Tune-y] 15: mape.test.mean=0.126; time: 0.1 min
[Tune-x] 16: fw.perc=0.654; fw.method=information.gain; nrounds=100; min_child_weight=6.23; max_depth=9; subsample=0.781; colsample_bytree=0.881
[Tune-y] 16: mape.test.mean=0.123; time: 0.1 min
[Tune-x] 17: fw.perc=0.376; fw.method=information.gain; nrounds=100; min_child_weight=2.85; max_depth=8; subsample=0.997; colsample_bytree=0.678
[Tune-y] 17: mape.test.mean=0.128; time: 0.1 min
[Tune-x] 18: fw.perc=0.432; fw.method=information.gain; nrounds=100; min_child_weight=7.46; max_depth=7; subsample=0.965; colsample_bytree=0.436
[Tune-y] 18: mape.test.mean=0.137; time: 0.0 min
[Tune-x] 19: fw.perc=0.389; fw.method=information.gain; nrounds=100; min_child_weight=3.46; max_depth=10; subsample=0.508; colsample_bytree=0.627
[Tune-y] 19: mape.test.mean=0.134; time: 0.1 min
[Tune-x] 20: fw.perc=0.468; fw.method=information.gain; nrounds=100; min_child_weight=4.85; max_depth=6; subsample=0.684; colsample_bytree=0.799
[Tune-y] 20: mape.test.mean=0.13; time: 0.0 min
[Tune-x] 21: fw.perc=0.253; fw.method=information.gain; nrounds=100; min_child_weight=2.35; max_depth=8; subsample=0.871; colsample_bytree=0.933
[Tune-y] 21: mape.test.mean=0.124; time: 0.1 min
[Tune-x] 22: fw.perc=0.622; fw.method=information.gain; nrounds=100; min_child_weight=8.47; max_depth=4; subsample=0.522; colsample_bytree=0.462
[Tune-y] 22: mape.test.mean=0.134; time: 0.0 min
[Tune-x] 23: fw.perc=0.402; fw.method=information.gain; nrounds=100; min_child_weight=7.22; max_depth=5; subsample=0.569; colsample_bytree=0.938
[Tune-y] 23: mape.test.mean=0.128; time: 0.0 min
[Tune-x] 24: fw.perc=0.53; fw.method=information.gain; nrounds=100; min_child_weight=9.71; max_depth=10; subsample=0.787; colsample_bytree=0.962
[Tune-y] 24: mape.test.mean=0.13; time: 0.1 min
[Tune-x] 25: fw.perc=0.674; fw.method=information.gain; nrounds=100; min_child_weight=5.61; max_depth=9; subsample=0.819; colsample_bytree=0.415
[Tune-y] 25: mape.test.mean=0.136; time: 0.1 min
[Tune-x] 26: fw.perc=0.213; fw.method=information.gain; nrounds=100; min_child_weight=8.88; max_depth=5; subsample=0.954; colsample_bytree=0.736
[Tune-y] 26: mape.test.mean=0.138; time: 0.0 min
[Tune-x] 27: fw.perc=0.227; fw.method=information.gain; nrounds=100; min_child_weight=5.95; max_depth=10; subsample=0.547; colsample_bytree=0.761
[Tune-y] 27: mape.test.mean=0.145; time: 0.1 min
[Tune-x] 28: fw.perc=0.583; fw.method=information.gain; nrounds=100; min_child_weight=1.41; max_depth=6; subsample=0.826; colsample_bytree=0.713
[Tune-y] 28: mape.test.mean=0.122; time: 0.0 min
Error in crossover(t(X[parents, , drop = FALSE])) : 
  Argument 's_parents' is not a real matrix.
In addition: Warning message:
In dist(X) : NAs introduced by coercion

Any advice on how to solve this?

berndbischl commented 7 years ago

makeDiscreteParam("fw.perc", values = 0.6, tunable = FALSE),

Why are you setting tunable = FALSE? I guess this is not doing what you think it does; it is internal markup for the learner. I think you want to fix the param to a value, right? Then just use setHyperPars for the learner.
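
For example, the fixed values could be set on the learner with setHyperPars and dropped from the param set. A minimal sketch, reusing the values from the original param set:

lrn_test = makeFilterWrapper(learner = "regr.xgboost")
# fix the constants on the learner instead of encoding them as non-tunable discrete params
lrn_test = setHyperPars(lrn_test, fw.perc = 0.6, fw.method = "information.gain", nrounds = 100)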

berndbischl commented 7 years ago

surrogate.lrn = makeLearner("regr.randomForest", predict.type = "se")
surrogate.lrn = makeImputeWrapper(surrogate.lrn,
                                  classes = list(numeric = imputeConstant(1e3),
                                                 factor = imputeConstant("__miss__")))

Why are you not using the MBO default surrogate model here?
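
For reference, omitting the learner argument falls back to mlrMBO's default surrogate, which (per the mlrMBO docs) is a kriging model for purely numeric parameter spaces and a random forest otherwise. A minimal sketch:

# no `learner` argument: mlrMBO picks its default surrogate
ctrl = makeTuneControlMBO(mbo.control = mbo.ctrl)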

berndbischl commented 7 years ago

I updated to the latest git version of mlrmbo (release 1.1.1) and the following code runs fine,

So your potential problem is resolved? I will still answer the questions that follow now.

berndbischl commented 7 years ago

Why is the learner called 160 times?

10 iters x 10 propose.points per iteration = 100 proposals; add the default initial design (4 x 7 params = 28 points) and final.evals = 32, and you get 160 evaluations in total.

If you want more fine-grained control over budget and termination, read ?setMBOControlTermination.
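
For example, the total budget can be capped explicitly. A minimal sketch; max.evals is one of the termination criteria setMBOControlTermination accepts, and the numbers here are illustrative:

# with propose.points = 10, iters = 10 already yields 10 x 10 = 100 proposals,
# on top of the initial design and any final.evals
mbo.ctrl = setMBOControlTermination(mbo.ctrl, iters = 10L, max.evals = 100L)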

berndbischl commented 7 years ago

Why does the mape change despite the fact that the hyperparameters do not change? I assume it is due to how the cross-validation splits are drawn, but I just want to check.

No. The CV splits are synchronized and all the same; read ?TuneControl:

same.resampling.instance    [logical(1)]
Should the same resampling instance be used for all evaluations to reduce variance? Default is TRUE.

But xgboost is a stochastic algorithm; just consider column subsampling ("colsample_bytree").
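
To make the shared splits explicit, the resampling can be instantiated once and the instance passed in; this is what same.resampling.instance = TRUE already does internally. A minimal sketch:

rin = makeResampleInstance(cv3, task = bh.task_no_factor)
res = tuneParams(lrn_test, task = bh.task_no_factor, resampling = rin,
                 par.set = param_test, measures = mape, control = ctrl)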

abossenbroek commented 7 years ago

@berndbischl, great input, thanks for your feedback. I tried to find the right spots in the documentation, but sometimes it is a bit hard. The reason I use a custom surrogate learner is that I also do SVM regression, which sometimes yields NAs.

@berndbischl: I noticed a typo in the code with which I thought I had solved the issue, so I removed my comment. I simplified the steps as per your recommendation, but I still get an error. Any advice would be greatly appreciated!

mb706 commented 4 years ago

I also ran into this:

library(mlrMBO)

obj.fun <- smoof::makeSingleObjectiveFunction(
  fn = function(x) checkmate::assertIntegerish(x$ypar),
  par.set = mlrCPO::pSS(xpar: integer[0, 10], ypar: integer[0, 10]),
  has.simple.signature = FALSE)

ctrl <- makeMBOControl(propose.points = 2)
ctrl <- setMBOControlMultiPoint(ctrl, method = "moimbo")
ctrl <- setMBOControlInfill(ctrl, makeMBOInfillCritEI())
mbo(obj.fun, control = ctrl)

gives the same error:

Computing y column(s) for design. Not provided.
[mbo] 0: xpar=6; ypar=2 : y = 2 : 0.0 secs : initdesign
[mbo] 0: xpar=3; ypar=4 : y = 4 : 0.0 secs : initdesign
[mbo] 0: xpar=7; ypar=10 : y = 10 : 0.0 secs : initdesign
[mbo] 0: xpar=0; ypar=9 : y = 9 : 0.0 secs : initdesign
[mbo] 0: xpar=5; ypar=6 : y = 6 : 0.0 secs : initdesign
[mbo] 0: xpar=8; ypar=0 : y = 0 : 0.0 secs : initdesign
[mbo] 0: xpar=10; ypar=7 : y = 7 : 0.0 secs : initdesign
[mbo] 0: xpar=1; ypar=3 : y = 3 : 0.0 secs : initdesign
Error in crossover(t(X[parents, , drop = FALSE])) : 
  Argument 's_parents' is not a real matrix.