Closed: sharingan000 closed this issue 5 months ago
Hello,
I can provide you an example that was used on x86_64, Linux Ubuntu 20.04, CUDA 11.7, NVIDIA A100, which you will need to adapt to your setup.
Download cudnn-linux-x86_64-8.5.0.96_cuda11-archive.tar.xz
from NVIDIA and place it at the root of the repository. Then run docker_build.sh
and docker_run.sh
using the following Dockerfile:
FROM nvidia/cuda:11.7.0-cudnn8-runtime-ubuntu20.04 AS base
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    apt-get -y update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
    --no-install-recommends tzdata \
    procps \
    zlib1g \
    libglib2.0-0 libgl1-mesa-glx libsm6 libxext6 \
    curl build-essential git libssl-dev wget unzip \
    libxrender-dev libcairo2-dev \
    python3.9 python3.9-dev pip
COPY cudnn-linux-x86_64-8.5.0.96_cuda11-archive.tar.xz .
RUN tar -xvf cudnn-linux-x86_64-8.5.0.96_cuda11-archive.tar.xz && \
    cp cudnn-*-archive/include/cudnn*.h /usr/local/cuda/include && \
    cp -P cudnn-*-archive/lib/libcudnn* /usr/local/cuda/lib64 && \
    chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/lib64
RUN ln -s /usr/local/cuda/lib64/libcublas.so.11 /usr/local/cuda/lib64/libcublas.so
ENV PATH="${PATH}:/app"
FROM base AS dependency
RUN useradd -ms /bin/bash app \
    && mkdir /app \
    && chown -R app:0 /app \
    && chmod g=u -R /app
WORKDIR /app
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    python3.9 -m pip install --no-warn-script-location torch==1.13.1+cu117 torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117 \
    && python3.9 -m pip install --no-warn-script-location pytorch_lightning==2.0.2 torch-geometric==2.3.1 \
    scikit-learn seaborn timm mahotas more_itertools
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    python3.9 -m pip install --no-warn-script-location rdkit-pypi CairoSVG SmilesPE python-Levenshtein \
    nltk ipykernel ipython rouge-score opencv-python \
    albumentations \
    paddleocr paddlepaddle-gpu==2.4.2.post117 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html \
    torchsummary weighted-levenshtein
FROM dependency AS specific
WORKDIR /app
COPY --chown=app:0 . /app/MolGrapher/
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    pip install -e /app/MolGrapher/ \
    && pip install -e /app/MolGrapher/MolDepictor/
FROM specific AS production
WORKDIR /app
RUN wget https://huggingface.co/ds4sd/MolGrapher/resolve/main/models/graph_classifier/gc_gcn_model.ckpt -O /app/MolGrapher/data/models/graph_classifier/gc_gcn_model.ckpt
RUN wget https://huggingface.co/ds4sd/MolGrapher/resolve/main/models/graph_classifier/gc_no_stereo_model.ckpt -O /app/MolGrapher/data/models/graph_classifier/gc_no_stereo_model.ckpt
RUN wget https://huggingface.co/ds4sd/MolGrapher/resolve/main/models/graph_classifier/gc_stereo_model.ckpt -O /app/MolGrapher/data/models/graph_classifier/gc_stereo_model.ckpt
RUN wget https://huggingface.co/ds4sd/MolGrapher/resolve/main/models/keypoint_detector/kd_model.ckpt -O /app/MolGrapher/data/models/keypoint_detector/kd_model.ckpt
RUN chmod ugo+rwx -R /app
WORKDIR /app/MolGrapher/molgrapher/scripts/annotate/
CMD ["/bin/sh", "./run.sh"]
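The contents of docker_build.sh and docker_run.sh are not shown above. A minimal sketch of what they might look like for this Dockerfile (the image tag is an assumption; the `RUN --mount=type=cache` lines require BuildKit, and GPU access requires the NVIDIA Container Toolkit on the host):

```shell
#!/bin/sh
# docker_build.sh (hypothetical sketch):
# BuildKit is needed for the RUN --mount=type=cache instructions.
DOCKER_BUILDKIT=1 docker build -t molgrapher-gpu .

# docker_run.sh (hypothetical sketch):
# --gpus all exposes the host GPUs (NVIDIA Container Toolkit required).
docker run --rm --gpus all molgrapher-gpu
```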
I hope this can help,
Best,
Lucas
Hi, dear developers! Thank you very much for sharing the code!
Could you help me by providing a Dockerfile for running on GPU?