facebookresearch / detectron2

Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
https://detectron2.readthedocs.io/en/latest/
Apache License 2.0

how to use dockerfile #56

Closed niuwenju closed 5 years ago

niuwenju commented 5 years ago

There are some problems when I use the Dockerfile to run it:

AssertionError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from

But I have installed a driver, version 430.26.

sinjax commented 5 years ago

Yeah, this one caught me out as well. The current Dockerfile reads:

ENV FORCE_CUDA="1"
RUN pip install -e /detectron2_repo

The issue is that detectron2's setup.py requires CUDA to be present, and as far as I can see there is no way to access CUDA during the build. So instead you have to build the image up to that point, start a container from it, perform the final step inside it, and then commit the container from outside.

Step 1: Dockerfile (saved as Dockerfile.partial):

FROM nvidia/cuda:10.1-cudnn7-devel
# To use this Dockerfile:
# 1. `nvidia-docker build -t detectron2:v0 .`
# 2. `nvidia-docker run -it --name detectron2 detectron2:v0`

ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y \
        libpng-dev libjpeg-dev python3-opencv ca-certificates \
        python3-dev build-essential pkg-config git curl wget automake libtool && \
  rm -rf /var/lib/apt/lists/*

RUN curl -fSsL -O https://bootstrap.pypa.io/get-pip.py && \
        python3 get-pip.py && \
        rm get-pip.py

# install dependencies
# See https://pytorch.org/ for other options if you use a different version of CUDA
RUN pip install torch torchvision cython \
        'git+https://github.com/facebookresearch/fvcore'
RUN pip install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'

# install detectron2
RUN git clone https://github.com/facebookresearch/detectron2 /detectron2_repo
ENV FORCE_CUDA="1"

docker build . -f Dockerfile.partial -t detectron2-partial

Step 2:

docker run -it detectron2-partial pip install -e /detectron2_repo

Step 3: Wait for that to finish, then run docker ps and look for the running detectron2-partial container where your pip install is happening. Its ID will be some hash like b1ab0d1e909b; you can then do:

docker commit b1ab0d1e909b detectron2

and you should have a Docker image with detectron2 installed.

But honestly, this isn't very nice. I'm sure there is a proper way to compile things that link against CUDA as part of a docker build, but I'm not sure what it is :)

ppwwyyxx commented 5 years ago

With #61, the build command nvidia-docker build -t detectron2:v0 . works for me.
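(For background on why a build can work without a GPU present: setup.py cannot query an absent device for its compute capability, so the usual trick is to force the CUDA build and pin the target architectures explicitly. A minimal sketch of the idea; the exact variables and values used in #61 may differ:)

```dockerfile
# Build the CUDA kernels even though no GPU/driver is visible during `docker build`.
ENV FORCE_CUDA="1"
# Compile for fixed compute capabilities instead of querying the (absent) local GPU.
# 6.1 covers Pascal cards (GTX 1050 / 1080 Ti), 7.0 covers Volta (V100).
ENV TORCH_CUDA_ARCH_LIST="6.1;7.0"
RUN pip install -e /detectron2_repo
```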

sinjax commented 5 years ago

Thanks for the fix @ppwwyyxx. It now compiles successfully, but I get a similar bug to what would happen if I used my approach above to compile on a V100 (for example) and then tried to run on my laptop's GTX 1050 or on one of our local servers, which have GTX 1080 Ti cards. Namely:

> docker run --net=host --runtime=nvidia -u $(id -u):$(id -g) ... /detectron2/base:0.1 python3 /detectron2_repo/demo/demo.py --config-file /detectron2_repo/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml --input ~/London1_input/frame_0003.jpg --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:424: container init caused \"process_linux.go:407: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig --device=all --compute --utility --require=cuda>=10.1 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=396,driver<397 brand=tesla,driver>=410,driver<411 --pid=53375 /var/lib/docker/overlay2/d74c2d66316c7b908576f4a954eafbc37be6ab3efa52c255ccc1758f7a5d4a36/merged]\\\\nnvidia-container-cli: requirement error: unsatisfied condition: brand = tesla\\\\n\\\"\"": unknown.

This error does not appear if I create the image before the final step and build detectron2 on each GPU in turn.

Given that detectron2 does work on the 1080 Ti if I compile on it directly, I am led to believe that there is some setting of TORCH_CUDA_ARCH_LIST that would make it work, but I have yet to find it.
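For reference, the compute capabilities of the GPUs mentioned in this thread are standard NVIDIA values, so a TORCH_CUDA_ARCH_LIST value covering all of them can be derived mechanically. The helper below is only illustrative, not part of detectron2:

```python
# Compute capability ("SM version") for the GPUs mentioned in this thread.
# These are standard NVIDIA values, not detectron2-specific.
COMPUTE_CAPABILITY = {
    "GTX 1050": "6.1",     # Pascal
    "GTX 1080 Ti": "6.1",  # Pascal
    "Tesla V100": "7.0",   # Volta
}

def arch_list_for(gpus):
    """Build a TORCH_CUDA_ARCH_LIST value covering the given GPUs."""
    caps = sorted({COMPUTE_CAPABILITY[g] for g in gpus})
    return ";".join(caps)

print(arch_list_for(["GTX 1050", "GTX 1080 Ti", "Tesla V100"]))  # 6.1;7.0
```

With TORCH_CUDA_ARCH_LIST="6.1;7.0" set at build time, a single image should run on both the Pascal and Volta machines, since the extensions are compiled for both architectures.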

RuiLiu0129 commented 3 years ago

> (quoting sinjax's workaround above in full)

Solved my problem using this method! Extremely helpful!

Ubuntu 18.04 and CUDA 10.1