rapidsai / cuml

cuML - RAPIDS Machine Learning Library
https://docs.rapids.ai/api/cuml/stable/
Apache License 2.0

[QST] RAPIDS + Optuna on multi-GPU #5274

Open ptynecki opened 1 year ago

ptynecki commented 1 year ago

Hello,

I would like to ask for tips on setting up and running Optuna in a RAPIDS environment for hyper-parameter optimization across multiple GPUs.

Currently, I am using the OptunaSearchCV class to support customized cross-validation within hyper-parameter optimization for SVM, RF, and XGBoost. I have a workstation with two NVIDIA A6000 GPUs connected by NVLink.

The example code I am using is:

optuna_search = optuna.integration.OptunaSearchCV(
    estimator=model,                 # cuML estimator (SVM/RF) or XGBoost model
    param_distributions=params,      # Optuna distributions to search over
    cv=cv_10,                        # customized cross-validation splitter
    scoring=SCORING,
    verbose=0,
    n_jobs=1,                        # trials run sequentially, one at a time
    n_trials=N_TRIALS,
    random_state=RANDOM_STATE
)

That code lets me run the optimization on a single GPU in RAPIDS (installed via conda).

I am wondering whether I can run the optimization on both GPUs at the same time (to speed up the study). If not, let me know what alternatives I have (e.g. RAPIDS, Dask, and Kubernetes).

beckernick commented 1 year ago

It is probably possible to do this using Dask to orchestrate the two GPUs. We have an Optuna HPO example in the RAPIDS Docs Deployment Guide, though it uses a different interface. You may be able to adapt it to your use case.

cc @mmccarty @jacobtomlinson , in case you have any recommendations.

jacobtomlinson commented 1 year ago

Thanks for the ping @beckernick. Those are the same links I would've shared.