ledatelescope / bifrost

A stream processing framework for high-throughput applications.
BSD 3-Clause "New" or "Revised" License

Refactor Dockerfiles to use common base layer #92

Open benbarsdell opened 7 years ago

benbarsdell commented 7 years ago
MilesCranmer commented 7 years ago

The specification of base images from the command line is very nice. I will have to read up on how Travis tests different languages, and whether there is a similar setting that would let us run over different environment variables.
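For reference, a minimal sketch of how a base image might be selected at build time, assuming the Dockerfile declares an ARG before its FROM line (the actual mechanism in this PR may differ, and the image tags here are only illustrative):

# Hypothetical example: pick the base image on the command line.
# Assumes the Dockerfile begins with:
#   ARG BASE_IMAGE=ubuntu:16.04
#   FROM ${BASE_IMAGE}
# (ARG-before-FROM requires Docker 17.05+)

# CPU build on the default base
docker build -t bifrost:cpu .

# GPU build on an NVIDIA CUDA base
docker build --build-arg BASE_IMAGE=nvidia/cuda:8.0-devel-ubuntu16.04 -t bifrost:gpu .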

MilesCranmer commented 7 years ago

Sorry that my review has a big red "X" on the top of it; I wasn't sure what the difference was between the options for suggesting changes. This PR is great.

benbarsdell commented 7 years ago

Thanks Miles, fantastic review! Lots of excellent points.

Re the test failures, what CUDA version and GPU model is this running with?

MilesCranmer commented 7 years ago

CUDA 8.0 on a Tesla K80 (an AWS p2.xlarge), with the default build settings (git checkout <this-pull-request>, then make docker).
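For concreteness, one way to reproduce that build, assuming GitHub's standard pull-request refs (the local branch name pr-92 is just an example):

# Fetch and check out this pull request (GitHub exposes it as pull/92/head),
# then build the Docker images with the default settings.
git fetch origin pull/92/head:pr-92
git checkout pr-92
make docker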

MilesCranmer commented 7 years ago

I set up Docker Hub builds on my fork. You can pull with, e.g.,

docker pull mcranmer/bifrost:gpu-base

The default (:latest) is the CPU image.

Here are all the images: https://hub.docker.com/r/mcranmer/bifrost/tags/

Each one is built off a different branch, e.g., https://github.com/MilesCranmer/bifrost/tree/docker-gpu

We could get Travis to send a POST request to Docker Hub every time a new commit passes all tests, so that every image is rebuilt.
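A hedged sketch of what that hook could look like, assuming a Docker Hub automated-build trigger URL stored as a secure Travis environment variable (this is not the actual configuration, and the exact trigger endpoint comes from the Docker Hub build settings):

# Hypothetical after_success step in .travis.yml:
# DOCKER_HUB_TRIGGER_URL would point at the repository's remote build trigger.
curl -s -X POST "$DOCKER_HUB_TRIGGER_URL"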

MilesCranmer commented 7 years ago

We could also pull from official Docker library images to eliminate redundancy in our builds, e.g., python:2.7-slim (which comes with pip/setuptools and basic build dependencies):

https://github.com/docker-library/python/blob/1ca4a57b20a2f66328e5ef72df866f701c0cd306/2.7/slim/Dockerfile

MilesCranmer commented 7 years ago

Actually, I forgot that won't work, as the GPU images need the nvidia/cuda base instead of debian:jessie.

Do you know of any NVIDIA Docker library that attempts to re-create some of the official docker-library images on top of an nvidia/cuda base? It looks like quite a few of the official images are built off of Debian variants, so using nvidia/cuda as the base image while otherwise leaving the Dockerfile identical shouldn't break anything.

I made one for buildpack-deps:xenial - https://github.com/MilesCranmer/docker-cuda-buildpack, but it is obviously not official.
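As a rough illustration of that idea (not what the linked repository actually does), an official image's Dockerfile could have its base line rewritten and then be rebuilt otherwise unmodified; the tags below are only examples:

# Hypothetical sketch: swap the Debian base of an official Dockerfile
# for nvidia/cuda, then build the result unchanged.
sed 's|^FROM debian:jessie|FROM nvidia/cuda:8.0-devel|' Dockerfile > Dockerfile.cuda
docker build -f Dockerfile.cuda -t python-2.7-slim-cuda .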

MilesCranmer commented 7 years ago

ledatelescope now has a Docker Hub repository: https://hub.docker.com/r/ledatelescope/bifrost/

The only image is :gpu-base, based off of the single-commit (orphaned) gpu-base branch.

MilesCranmer commented 7 years ago

A new image built on top of PyPy is :gpu-base-pypy, from the gpu-base-pypy branch.

MilesCranmer commented 7 years ago

:gpu-pypy exists now as well. I have it delete the entirety of the Bifrost source to save space, while the module remains importable:

nvidia-docker run -it --rm ledatelescope/bifrost:gpu-pypy

python is symlinked to pypy, so the default make install works without modifying the Makefiles.
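A minimal sketch of those two tricks, written as the shell steps a Dockerfile RUN might perform (the real Dockerfile may differ; the /bifrost source path is only an assumption):

# 1) Symlink pypy as python so the stock Makefiles work unmodified.
ln -s "$(command -v pypy)" /usr/local/bin/python
# 2) Build and install Bifrost, then delete the source tree to save space;
#    the installed module stays importable.
make -j && make install
cd / && rm -rf /bifrost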