In this workshop, attendees will see how common deep learning tasks in PyTorch can be parallelized using Dask clusters on Saturn Cloud.
After this workshop you will know:
To get the full learning value from this workshop, attendees should have prior experience with PyTorch. Experience with parallel computing is not needed.
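The core pattern the workshop builds on is Dask's distributed `Client`, which follows the `concurrent.futures` interface: work is dispatched with `client.submit(...)` and collected with `.result()`. As a rough, cluster-free stand-in, the stdlib `ThreadPoolExecutor` sketches the same submit-and-gather pattern (`train_one` here is a hypothetical placeholder, not part of the workshop code):

```python
from concurrent.futures import ThreadPoolExecutor

def train_one(lr):
    # Hypothetical stand-in for a per-worker training task;
    # with Dask this would be client.submit(train_one, lr).
    return lr * 2

with ThreadPoolExecutor(max_workers=2) as ex:
    futures = [ex.submit(train_one, lr) for lr in (0.1, 0.01)]
    results = [f.result() for f in futures]

print(results)  # -> [0.2, 0.02]
```

On Saturn Cloud the same submit/result calls run against the Dask cluster's workers instead of local threads.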
If you are going to work through all the exercises, follow the steps below. If you'd prefer to read along without running the code, the notebook_output folder above contains all the notebooks with the code already run.
- Image: `saturncloud/saturn-gpu:2020.11.30` (or the most recent date suffix available)
- Start script: `/srv/conda/envs/saturn/bin/pip install graphviz dask-pytorch-ddp plotnine tensorboardX`
- Environment variables: `DASK_DISTRIBUTED__WORKER__DAEMON=False`
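The `DASK_DISTRIBUTED__WORKER__DAEMON=False` setting matters because Dask starts its workers as daemonic processes by default, and Python forbids daemonic processes from spawning children — which a PyTorch `DataLoader` with `num_workers > 0` needs to do. A standalone sketch using only the stdlib `multiprocessing` module (no Dask required) reproduces the restriction:

```python
import multiprocessing as mp

ctx = mp.get_context("fork")  # fork start method, so no __main__ guard is needed

def grandchild(q):
    q.put("child ran")

def child(q):
    # Inside a daemonic process, spawning a further child raises AssertionError —
    # the same failure a DataLoader's worker processes hit inside a daemonic Dask worker.
    try:
        p = ctx.Process(target=grandchild, args=(q,))
        p.start()
        p.join()
    except AssertionError as err:
        q.put(str(err))

q = ctx.Queue()
p = ctx.Process(target=child, args=(q,), daemon=True)
p.start()
msg = q.get()
p.join()
print(msg)  # -> daemonic processes are not allowed to have children
```

Setting the environment variable makes Dask start non-daemonic workers, so `DataLoader` subprocesses can be created normally.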
- Jupyter size: V100-2XLarge (8 cores, 61 GB RAM, 1 GPU)
- Dask scheduler size: Medium
- Dask worker size: V100-2XLarge (8 cores, 61 GB RAM, 1 GPU)
```bash
# Clone the workshop materials and copy them into the project directory
git clone https://github.com/saturncloud/workshop-dask-pytorch.git /tmp/workshop-dask-pytorch
cp -r /tmp/workshop-dask-pytorch /home/jovyan/project
```
The project from the Saturn UI should look similar to this:
Your JupyterLab environment should look like this: