Hi Pierre, I don't have much experience with Docker. If you find a way to install it with Docker, it could be a great addition for future users.
Hello,
sorry for the delay. If it helps, here are the general instructions to run your template:
1) Create a "Dockerfile":
```Dockerfile
FROM pytorch/pytorch:1.12.1-cuda11.3-cudnn8-devel
ENV DEBIAN_FRONTEND noninteractive
# https://github.com/NVIDIA/nvidia-docker/issues/1632#issuecomment-1112667716
RUN rm /etc/apt/sources.list.d/cuda.list
#RUN rm /etc/apt/sources.list.d/nvidia-ml.list
RUN apt-get update
RUN apt-get install -y python3-opencv ca-certificates python3-dev git wget sudo ninja-build
RUN ln -sv /usr/bin/python3 /usr/bin/python
# create a non-root user
ARG USER_ID=1000
#RUN useradd -m --no-log-init --system --uid ${USER_ID} appuser -g sudo
#RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
#USER appuser
WORKDIR /home/root
ENV PATH="/home/root/.local/bin:${PATH}"
RUN wget https://bootstrap.pypa.io/pip/get-pip.py && \
python3 get-pip.py --user && \
rm get-pip.py
# install dependencies
# See https://pytorch.org/ for other options if you use a different version of CUDA
# RUN pip3 install --user tensorboard cmake # cmake from apt-get is too old
RUN pip3 install --user cmake # cmake from apt-get is too old
# RUN pip3 install --user torch==1.10 torchvision==0.11.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html
RUN pip3 install opencv-python
RUN pip3 install --user 'git+https://github.com/facebookresearch/fvcore'
# install detectron2
RUN git clone -b v0.5 https://github.com/facebookresearch/detectron2 detectron2_repo
# set FORCE_CUDA because during `docker build` cuda is not accessible
ENV FORCE_CUDA="1"
# This will by default build detectron2 for all common cuda architectures and take a lot more time,
# because inside `docker build`, there is no way to tell which architecture will be used.
ARG TORCH_CUDA_ARCH_LIST="Kepler;Kepler+Tesla;Maxwell;Maxwell+Tegra;Pascal;Volta;Turing"
ENV TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST}"
RUN pip3 install --user -e detectron2_repo
# Set a fixed model cache directory.
ENV FVCORE_CACHE="/tmp"
WORKDIR /home/root/detectron2_repo
```
2) Build the image:
```bash
docker build --build-arg USER_ID=$UID -t detectron2:v0 .
```
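Optionally, since the Dockerfile already declares `TORCH_CUDA_ARCH_LIST` as a build argument, you can restrict it to your GPU's architecture to shorten the build, and then do a quick import check. The architecture value below is only an example, not part of the original instructions:
```bash
# Optional: build detectron2 for a single GPU architecture (example value, adjust to your GPU)
docker build --build-arg USER_ID=$UID \
  --build-arg TORCH_CUDA_ARCH_LIST="Turing" \
  -t detectron2:v0 .

# Quick sanity check: detectron2 should import inside the freshly built image
docker run --rm detectron2:v0 python -c "import detectron2; print(detectron2.__version__)"
```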
3) Start the container and open a bash shell:
```bash
docker run --gpus all -p 9976:9976 -it -v $PATH_WORKING_DIRECTORY/:/YOUR_DIRECTORY --name=detectron2_container detectron2:v0
```
The idea here is to mount a volume that gives the container access to the Python scripts used to test the model, as well as to the model weights. A concrete example is shown below.
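For example (the host path below is just a placeholder, adapt it to your setup):
```bash
# Mount a host folder holding both the test scripts and the model weights,
# so they are visible inside the container under /YOUR_DIRECTORY
docker run --gpus all -p 9976:9976 -it \
  -v /path/to/working_dir/:/YOUR_DIRECTORY \
  --name=detectron2_container detectron2:v0
```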
4) Run your script with detectron2 installed:
Go inside the container, change to the directory containing your script, and run `python script.py`.
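If the container from step 3 is already running, a typical way to do this (the script name and mounted path are placeholders) is:
```bash
# Open a shell in the running container (name set in the docker run command above)
docker exec -it detectron2_container bash
# Inside the container: move to the mounted directory and launch the script
cd /YOUR_DIRECTORY
python script.py
```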
Thank you very much!
Hello,
thank you very much for your work. Given how complex the installation of the detectron2 package is, do you plan to provide a Docker setup to make it easier to run your project?
If not, and I manage to do it myself, I can send you the details if needed.
Thank you, Pierre