Closed jacobtomlinson closed 6 months ago
UPDATES:
CC @jacobtomlinson
I was able to successfully launch a Dataproc cluster with RAPIDS v23.12 stable and CUDA 11.8 (both passed via the metadata flags --rapids-version and --cuda-version).
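For reference, a launch along these lines can be sketched as below. This is a hedged sketch, not the exact command used here: the cluster name, region, bucket paths, image version, and accelerator type are placeholder assumptions; only the rapids-version/cuda-version metadata keys come from this issue.

```shell
# Sketch of the cluster launch described above. Bucket paths, region,
# image version and accelerator type are placeholder assumptions.
gcloud dataproc clusters create rapids-test-cluster \
    --region us-central1 \
    --image-version 2.1-ubuntu20 \
    --metadata rapids-version=23.12,cuda-version=11.8 \
    --initialization-actions gs://my-bucket/install_gpu_driver.sh,gs://my-bucket/rapids.sh \
    --worker-accelerator type=nvidia-tesla-t4,count=1 \
    --optional-components JUPYTER \
    --enable-component-gateway
```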
The install_gpu_driver.sh script downloads packages for Ubuntu 18.04 and has outdated versions of CUDA and the GPU drivers; it installs CUDA 11.2 by default and doesn't include CUDA 12 as an option.
The latest RAPIDS 24.02 is only compatible with Ubuntu 20.04 and 22.04, so the install scripts need to be updated accordingly, along with the newer drivers.
To test RAPIDS libraries in the notebook environment, we need to edit the rapids.sh script to activate the conda environment (dask-rapids) and register it as a kernel in JupyterLab/Notebook.
For now, users will have to manually conda activate and register the dask-rapids kernel from the terminal in Jupyter.
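The manual workaround looks roughly like this from a Jupyter terminal on the cluster; the display name is an illustrative choice, and the dask-rapids environment name comes from this issue:

```shell
# Activate the RAPIDS conda environment created by rapids.sh.
conda activate dask-rapids

# Register it as a Jupyter kernel so it appears in the launcher.
# --display-name is an arbitrary label; pick whatever is clearest.
python -m ipykernel install --user --name dask-rapids --display-name "dask-rapids"
```

After refreshing JupyterLab, the dask-rapids kernel should be selectable for new notebooks.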
Alternatively, users can set dataproc:conda.env.config.uri, which is the absolute path to a Conda environment YAML config file located in Cloud Storage. This file is used to create and activate a new Conda environment on the cluster. But this option is redundant, because you first have to export the conda env into a .yaml file.
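That export-then-upload round trip is the redundancy in question; sketched out (bucket path is a placeholder assumption), it amounts to:

```shell
# Export the existing environment to a YAML file...
conda env export -n dask-rapids > dask-rapids.yaml

# ...upload it to Cloud Storage (placeholder bucket)...
gsutil cp dask-rapids.yaml gs://my-bucket/dask-rapids.yaml

# ...and point new clusters at it via a cluster property.
gcloud dataproc clusters create rapids-test-cluster \
    --region us-central1 \
    --properties dataproc:conda.env.config.uri=gs://my-bucket/dask-rapids.yaml
```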
Refer to #1137 -- this issue is also being tracked on the Google Cloud Dataproc GitHub.
Following the latest Dataproc docs works, but doesn't give the best experience. We could improve the docs in a few ways.
Once things are set up, if you use JupyterLab, RAPIDS is not installed in the base environment, and the RAPIDS environment isn't registered as a kernel that you can select in the notebook.
We should show how to connect to the Dask cluster in a notebook and run some work.
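A minimal sketch of what that notebook snippet could look like, assuming a Dask scheduler is already running on the cluster; the scheduler address is a placeholder, not something confirmed in this issue:

```python
# Connect to the cluster's Dask scheduler from a notebook.
# The address is a placeholder -- on Dataproc the scheduler typically
# runs on the master node; adjust host and port for your setup.
from dask.distributed import Client
import dask.array as da

client = Client("tcp://<master-node>:8786")  # placeholder address

# Run a small computation to confirm the workers are reachable.
x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))
print(x.mean().compute())
```

With RAPIDS installed on the workers, the same pattern applies to dask-cudf / dask-cuda workloads.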