DEPRECATED: please use https://github.com/softwareunderground/subsurface-ml-docker
A docker image fully loaded with Geo* & ML related packages.
Run `make notebook` and open the `./workspace` folder; you should see the contents of the parent directory mounted as a volume, which means read/write access from the container. `~/Datasets` is also mounted in the container at `/data`.
To change any of the mounted paths, or add more, edit the Makefile.
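As a sketch of what those mounts might look like in the Makefile (the variable names and targets here are illustrative assumptions, not the actual file, so check the Makefile itself before editing):

```makefile
# Hypothetical excerpt -- check the real Makefile for actual variable names.
DATA?=${HOME}/Datasets          # host folder mounted at /data in the container
SRC?=$(shell dirname `pwd`)     # parent directory mounted at /workspace

notebook: build
	docker run -it -p 8888:8888 \
	    -v $(SRC):/workspace \
	    -v $(DATA):/data \
	    $(IMAGE) jupyter notebook --ip=0.0.0.0
```

Adding another mount is just another `-v host_path:container_path` flag on the `docker run` line.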
At the moment, the configuration has first-class setup for Keras, as that is where the project started out.
A full Anaconda install is huge, and we are adding common ML and geo packages on top of it. To stop the image getting too bloated we have stuck with a Miniconda base image, which means we need to be explicit about what we add, but we only get what we want.
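A minimal sketch of that approach (the package names below are illustrative, not the actual contents of the Dockerfile):

```dockerfile
# Miniconda base keeps the image small; everything else is added explicitly.
FROM continuumio/miniconda3

# ML stack (illustrative selection)
RUN conda install -y numpy scipy scikit-learn && \
    pip install keras

# Geo* stack (illustrative selection)
RUN conda install -y -c conda-forge gdal rasterio
```

Each `RUN` line groups related packages, so dropping a group you don't need is a one-line deletion.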
Some attempt has been made to organise the Dockerfile into sections, in the hope that it is easier for people to customise to their needs.
The container currently holds:
Here is a list of packages that were not included initially; maybe these should be turned into issues! :)
There are some other things it would be nice to do too:
General installation instructions are on the Docker site, but we give some quick links here:
We use a Makefile to wrap the longer docker commands in simple make targets.
Build the container and start a Jupyter Notebook
$ make notebook
Build the container and start an IPython shell
$ make ipython
Build the container and start a bash shell
$ make bash
For GPU support, install the NVIDIA drivers (ideally the latest) and nvidia-docker, then run:
$ make notebook GPU=0 # or [ipython, bash]
Switch keras between Theano and TensorFlow
$ make notebook BACKEND=theano
$ make notebook BACKEND=tensorflow
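Under the hood, Keras selects its backend from the `KERAS_BACKEND` environment variable, so the `BACKEND` option presumably just forwards that into the container. A hypothetical sketch of the relevant Makefile line (variable and target names are assumptions):

```makefile
# Hypothetical: forward the make variable into the container's environment.
BACKEND?=tensorflow

notebook: build
	docker run -it -e KERAS_BACKEND=$(BACKEND) -p 8888:8888 $(IMAGE)
```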
Mount a volume for external data sets
$ make DATA=~/mydata
Prints all make tasks
$ make help
You can change Theano parameters by editing /docker/theanorc.
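For example, a typical theanorc for GPU use might look like this (illustrative values, not necessarily the file shipped in this image):

```ini
[global]
device = gpu
floatX = float32

[nvcc]
fastmath = True
```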
Note: if you have a problem running nvidia-docker, you may try the old way we used, but it is not recommended. If you find a bug in nvidia-docker, please report it there and try using nvidia-docker as described above.
$ export CUDA_SO=$(\ls /usr/lib/x86_64-linux-gnu/libcuda.* | xargs -I{} echo '-v {}:{}')
$ export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
$ docker run -it -p 8888:8888 $CUDA_SO $DEVICES gcr.io/tensorflow/tensorflow:latest-gpu
MIT
This Docker and Makefile layout was originally based on the docker starter example in the Keras repo. The Dockerfile in particular has been customised to make it easier to see groups of related packages and add or remove them as necessary. But the Makefile and instructions in this README are pretty much as-is and lovely. The original repository is available under MIT here.