Closed afkrause closed 1 year ago
The following Dockerfile should work:
FROM nvcr.io/nvidia/pytorch:22.08-py3
ARG UID=1000
ARG UNAME=testuser
ARG WANDB_API_KEY
RUN useradd -ms /bin/bash -u $UID $UNAME && \
mkdir -p /home/${UNAME} &&\
chown -R $UID /home/${UNAME}
WORKDIR /home/${UNAME}
ENV DEBIAN_FRONTEND="noninteractive"
ENV WANDB_API_KEY=$WANDB_API_KEY
ENV TORCH_HOME=/home/${UNAME}/.cache
# OPTIONAL - DeepPrivacy2 uses these environment variables to set directories outside the current working directory
#ENV BASE_DATASET_DIR=/work/haakohu/datasets
#ENV BASE_OUTPUT_DIR=/work/haakohu/outputs
#ENV FBA_METRICS_CACHE=/work/haakohu/metrics_cache
RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 qt5-default -y
RUN pip install git+https://github.com/facebookresearch/detectron2@96c752ce821a3340e27edd51c28a00665dd32a30#subdirectory=projects/DensePose
COPY setup.py setup.py
RUN pip install \
"numpy>=1.20" \
matplotlib \
cython \
tensorboard \
tqdm \
ninja==1.10.2 \
opencv-python==4.5.5.64 \
moviepy \
pyspng \
git+https://github.com/hukkelas/DSFD-Pytorch-Inference \
wandb \
termcolor \
git+https://github.com/hukkelas/torch_ops.git \
git+https://github.com/wmuron/motpy@c77f85d27e371c0a298e9a88ca99292d9b9cbe6b \
fast_pytorch_kmeans \
einops_exts \
einops \
regex \
setuptools==59.5.0 \
resize_right==0.0.2 \
pillow \
scipy==1.7.1 \
webdataset==0.2.26 \
scikit-image \
git+https://github.com/facebookresearch/detectron2@96c752ce821a3340e27edd51c28a00665dd32a30#subdirectory=projects/DensePose
RUN pip install --no-deps torch_fidelity==0.3.0
You can build it with:
docker build -t haakohu/fba_new --build-arg WANDB_API_KEY=YOUR_WANDB_KEY \
--build-arg UID=$(id -u) --build-arg UNAME=$(id -un) .
If you're not planning to train the network (or not use wandb logging), you can remove the WANDB_API_KEY argument.
Let me know if you get it working or not! :)
Thanks a lot! That Dockerfile works! (Additionally, I just needed to add GPU support by installing nvidia-docker.)
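For anyone else landing here: once the NVIDIA Container Toolkit (the successor to nvidia-docker) is installed on the host, the image built above can be started with GPU access roughly like this (a sketch; the image name follows the build command above, and the volume mount is just an example):

```shell
# Run the built image with all GPUs visible inside the container
# (requires the NVIDIA Container Toolkit on the host).
docker run --gpus all -it \
    -v "$(pwd)":/workspace \
    haakohu/fba_new \
    bash

# Inside the container, a quick sanity check that CUDA is visible:
# python -c "import torch; print(torch.cuda.is_available())"
```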
Good! Closing this now, but please open a new issue if you need any more help :)
Hello @hukkelas and @afkrause, I'm also having trouble with the setup. I tried this Docker solution on Windows 11 using WSL2 and got the following error:
(base) pkb@NB623-PKB:~/Dp2_docker$ docker build -t haakohu/fba_new --build-arg WANDB_API_KEY=YOUR_WANDB_KEY --build-arg UID=$(id -u) --build-arg UNAME=$(id -un) .
[+] Building 45.7s (10/12)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.64kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for nvcr.io/nvidia/pytorch:22.08-py3 45.5s
=> CANCELED [1/8] FROM nvcr.io/nvidia/pytorch:22.08-py3@sha256:1aa83e1a13f756f31dabf82bc5a3c4f30ba423847cb230ce8c515f3add88b262 0.1s
=> => resolve nvcr.io/nvidia/pytorch:22.08-py3@sha256:1aa83e1a13f756f31dabf82bc5a3c4f30ba423847cb230ce8c515f3add88b262 0.0s
=> => sha256:1aa83e1a13f756f31dabf82bc5a3c4f30ba423847cb230ce8c515f3add88b262 745B / 745B 0.0s
=> => sha256:7d14bc3d5fe8e2b50d82cf0a8ba7d3496d541da0a542afc93f23711e4a4a9077 10.43kB / 10.43kB 0.0s
=> => sha256:b3d16c03921732eb9ce48470344df495890c81b442b15f127c5df72216c220bb 43.91kB / 43.91kB 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 2B 0.0s
=> CACHED [2/8] RUN useradd -ms /bin/bash -u 1000 pkb && mkdir -p /home/pkb && chown -R 1000 /home/pkb 0.0s
=> CACHED [3/8] WORKDIR /home/pkb 0.0s
=> CACHED [4/8] RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 qt5-default -y 0.0s
=> CACHED [5/8] RUN pip install git+https://github.com/facebookresearch/detectron2@96c752ce821a3340e27edd51c28a00665dd32a30#subdirectory=projects/DensePose 0.0s
=> ERROR [6/8] COPY setup.py setup.py 0.0s
------
> [6/8] COPY setup.py setup.py:
------
failed to compute cache key: "/setup.py" not found: not found
@PedroKBrant you can just remove the line "COPY setup.py setup.py" in the Dockerfile, as it is no longer used.
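If you'd rather not edit the file by hand, a one-liner can drop that line (a small sketch; assumes the Dockerfile is in the current directory):

```shell
# Delete the obsolete COPY line from the Dockerfile in place.
sed -i '/^COPY setup.py setup.py/d' Dockerfile
```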
Thanks @hukkelas, I managed to create the Docker image. I just need to install the NVIDIA driver and then everything should work fine.
Dear Håkon and Frank,
I am having issues setting up deep_privacy2.
With my current setup, I cannot compile detectron2. My system: Linux Mint 21, Python 3.10, using a virtualenv with pip3 version 22.3.1, g++ version 11.3.0.
nvcc gives the following error:
box_iou_rotated_cuda.cu -o /tmp/pip-install-ivei5q55/detectron2_6d6f876c8cab4856b7bf43bd6663c37d/build/temp.linux-x86_64-3.10/tmp/pip-install-ivei5q55/detectron2_6d6f876c8cab4856b7bf43bd6663c37d/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14 /usr/include/c++/11/bits/std_function.h:435:145: error: parameter packs not expanded with ‘...’: 435 | function(_Functor&& __f)
Do you have an idea how I can fix this issue, or could you maybe provide a Dockerfile for deep_privacy2? Any help is highly appreciated!
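Not the maintainer, but for reference: that `std_function.h ... parameter packs not expanded with '...'` message is a known incompatibility between nvcc and the libstdc++ headers shipped with newer GCC versions (including 11.3). A common workaround is to point the CUDA extension build at an older host compiler before installing detectron2 (a sketch; assumes gcc-10/g++-10 are available from your distribution's repositories):

```shell
# Install an nvcc-compatible host compiler (Ubuntu/Mint).
sudo apt-get install -y gcc-10 g++-10

# Tell the detectron2 build to use it for host and CUDA compilation.
export CC=gcc-10 CXX=g++-10
pip install git+https://github.com/facebookresearch/detectron2@96c752ce821a3340e27edd51c28a00665dd32a30#subdirectory=projects/DensePose
```

Alternatively, the Dockerfile above sidesteps the problem entirely, since the NGC PyTorch image ships a compiler toolchain that matches its CUDA version.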