Packer and related scripts for configuring AWS GPU machines with the nvidia driver and docker. Users of the AMI should then use the nvidia or Tensorflow docker files, which handle installing CUDA, etc.
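For example, on an instance launched from the AMI, a quick way to confirm that containers can see the GPU (a minimal check, assuming nvidia-docker v2 is installed as described below and the instance has an attached GPU) is:

$ docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
$ docker run --runtime=nvidia --rm tensorflow/tensorflow:1.4.1-gpu-py3 \
    python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"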
ami-ef7b9a92 (wyss-mlpe-docker-gpu-2018-02-21T21-06-10Z)
Results for testing ami-ef7b9a92:
- c5.xlarge: PASS
- m4.2xlarge: PASS
- g3.4xlarge: PASS
- p3.2xlarge: PASS
- p3.8xlarge: PASS
Use packer to create an AMI to run nvidia-docker containers like tensorflow:1.4.1-gpu-py3.

The packer config gpu-packer.json creates an AMI backed by an Amazon EBS volume on a gp2 SSD drive, using the Ubuntu 16.04 AMI ami-d15a75c7 as a base.
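To see which variables, builders, and provisioners the template defines without running a build, packer's inspect subcommand can be used:

$ packer inspect gpu-packer.json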
The config then tells packer to set up the following (2018.02.03):

- nvidia-docker v2 (a rough install sketch is shown after this list)
- systemd service 00: optimizes driver settings
- systemd service 01: bash configuration
- awscli
- virtualenv and ipython
- virtualenvwrapper, with its home directory in ~/vw_venvs
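The provisioning scripts in this repo are the source of truth for how each piece is installed; as a rough sketch only, an nvidia-docker v2 install on Ubuntu 16.04 generally looks like the following (commands adapted from NVIDIA's nvidia-docker documentation, not copied from this repo's scripts):

# add the nvidia-docker apt repository and key
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
    sudo tee /etc/apt/sources.list.d/nvidia-docker.list
# install nvidia-docker2, then reload the docker daemon so the nvidia runtime is registered
$ sudo apt-get update && sudo apt-get install -y nvidia-docker2
$ sudo pkill -SIGHUP dockerd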
All bash scripts in this repo are, and should remain, well commented to document the build process.
Download packer from https://www.packer.io/downloads.html and unzip it. Optional: move the binary into /usr/local/bin.

Install the python3 libraries in requirements.txt to run the tests in the repository's test/ path. A combined sketch of these setup steps is shown below.
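In this sketch, the packer version and the use of pytest as the test runner are assumptions, not taken from this repo; a 64-bit Linux machine is also assumed:

# download and install packer (version 1.1.3 is an assumption; pick a current release)
$ curl -O https://releases.hashicorp.com/packer/1.1.3/packer_1.1.3_linux_amd64.zip
$ unzip packer_1.1.3_linux_amd64.zip
$ sudo mv packer /usr/local/bin/
# install the python3 test dependencies and run the tests (pytest assumed)
$ pip3 install -r requirements.txt
$ python3 -m pytest test/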
Validate the template.
$ packer validate gpu-packer.json
Build. You'll need your AWS keys.
$ packer build \
    -var 'aws_access_key=YOUR ACCESS KEY' \
    -var 'aws_secret_key=YOUR SECRET KEY' \
    gpu-packer.json
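To avoid typing keys directly on the command line, the same variables can be filled from environment variables (assuming AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are exported in your shell):

$ packer build \
    -var "aws_access_key=$AWS_ACCESS_KEY_ID" \
    -var "aws_secret_key=$AWS_SECRET_ACCESS_KEY" \
    gpu-packer.json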
Development is generously supported by AWS Cloud Credits for Research.
Thank you to 4Catalyzer for sharing early versions of these scripts.