Closed salahzoubi closed 3 months ago
Hello,
is there any chance you can share the Dockerfile you're using for
docker pull shaoguo/faster_liveportrait:v2
? In particular, I'm looking to upgrade the CUDA version to 11.8+ (so it can run on H100s), and I'm having a lot of trouble doing this from inside the existing image (i.e. re-installing cuda-toolkit 11.8 and redoing nvcc). I'm also wondering what steps you used to download TensorRT and get it running, because it's an extreme pain to get working from scratch... Thank you!
I'm afraid there's no such Dockerfile. The image is actually based on an nvidia/cuda image: I installed everything step by step inside a container following the readme instructions, then committed it as a new image. I believe you can do this too.
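That commit-based workflow can be sketched roughly as follows. This is not from the repo; the base image tag, container name, and target tag are all illustrative placeholders:

```shell
# Start an interactive container from an nvidia/cuda base image
# (tag is an example; pick one matching your driver/CUDA needs)
docker run --gpus all -it --name flp_build nvidia/cuda:12.1.0-devel-ubuntu22.04 bash

# ...inside the container: follow the readme steps (pip installs, TensorRT,
# plugin build, ffmpeg), then `exit`...

# Snapshot the container's filesystem as a new image, then clean up
docker commit flp_build my_faster_liveportrait:dev
docker rm flp_build
```

The downside of `docker commit` is that the image isn't reproducible from a recipe, which is exactly why a Dockerfile like the one below is preferable.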
@warmshao thanks for all your replies so far, you've been super helpful!
Is there a quick and easy way to get CUDA 11.8 or 12.1 into the image? The dependencies, especially TensorRT, seem to break when I try doing this manually...
Much appreciated!
This is the Dockerfile I'm using now.
The UniPose ops need a GPU to build, so that step can only be run inside a running container (hence the commented-out line at the end).
```dockerfile
FROM nvcr.io/nvidia/tensorrt:24.04-py3

COPY requirements.txt /opt/requirements.txt
RUN pip install --no-cache-dir -r /opt/requirements.txt
RUN pip install --no-cache-dir onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/

# Build the grid_sample3d TensorRT plugin for a range of GPU architectures
RUN git clone https://github.com/SeanWangJS/grid-sample3d-trt-plugin /opt/grid-sample3d-trt-plugin && \
    cd /opt/grid-sample3d-trt-plugin && \
    sed -i 's/"89"/"60;70;75;80;86;89"/g' CMakeLists.txt && \
    export PATH=/usr/local/cuda/bin:$PATH && \
    mkdir build && cd build && \
    cmake .. -DTensorRT_ROOT=/opt/tensorrt && \
    make

# Build FFmpeg with NVENC support
RUN mkdir -p /opt/ffmpeg
RUN cd /opt/ffmpeg && \
    git clone -q -b sdk/11.0 https://git.videolan.org/git/ffmpeg/nv-codec-headers.git && \
    cd nv-codec-headers && \
    make install
RUN apt update && \
    apt install -y build-essential \
        pkg-config \
        yasm \
        cmake \
        libtool \
        libc6 \
        libc6-dev \
        unzip \
        wget \
        libnuma1 \
        libnuma-dev \
        libx264-dev \
        libwebp-dev \
        libmp3lame-dev \
        libffmpeg-nvenc-dev && \
    rm -rf /var/lib/apt/lists/*
RUN cd /opt/ffmpeg && \
    git clone -q -b release/6.1 https://git.ffmpeg.org/ffmpeg.git ffmpeg/ && \
    cd ffmpeg && \
    ./configure --enable-nonfree \
        --enable-cuda-nvcc \
        --enable-nvenc \
        --enable-libnpp \
        --extra-cflags=-I/usr/local/cuda/include \
        --extra-ldflags=-L/usr/local/cuda/lib64 \
        --disable-static \
        --enable-shared \
        --enable-gpl \
        --enable-libwebp \
        --enable-libmp3lame \
        --enable-libx264 && \
    make -j 8 && make install && rm -rf /opt/ffmpeg

RUN pip install --no-cache-dir torch torchvision cupy-cuda12x
RUN apt update && apt install -y libgl1 && rm -rf /var/lib/apt/lists/*

WORKDIR /root/FasterLivePortrait

# Needs a GPU, so run inside the container rather than at build time:
#RUN cd /root/FasterLivePortrait/src/models/XPose/models/UniPose/ops && python setup.py build install
```
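To use the Dockerfile above, a build-and-run sequence along these lines should work. The image tag and mount path are placeholders, and the in-container step mirrors the commented-out UniPose line, which needs a visible GPU:

```shell
# Build from the directory containing the Dockerfile and requirements.txt
docker build -t faster_liveportrait:trt .

# Run with GPU access, mounting the repo checkout into the workdir
docker run --gpus all -it -v "$PWD":/root/FasterLivePortrait faster_liveportrait:trt bash

# Then, inside the container, build the UniPose ops:
#   cd /root/FasterLivePortrait/src/models/XPose/models/UniPose/ops
#   python setup.py build install
```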
If you need TensorRT 10, you can try
nvcr.io/nvidia/tensorrt:24.07-py3
but TensorRT 10 needs some changes to onnx2trt.py.
Check this for reference: https://github.com/aihacker111/Efficient-Live-Portrait/blob/main/experiment_examples/portrait2onnx/export_tensorrt.py
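For orientation, the kinds of builder-API changes TensorRT 10 requires look roughly like this. This is a generic sketch, not the actual onnx2trt.py; the file names and workspace size are illustrative, and it assumes the tensorrt 10 Python package is installed:

```python
# Sketch of TensorRT 10-style engine building from an ONNX file.
# Key differences from TensorRT 8.x noted in comments.
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
# TRT 10: networks are always explicit-batch; no EXPLICIT_BATCH flag needed.
network = builder.create_network(0)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:  # illustrative path
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
# TRT 10: config.max_workspace_size is gone; use memory-pool limits instead.
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 4 << 30)

# TRT 10: builder.build_engine() was removed; build a serialized plan and
# deserialize it with a Runtime if you need the engine object.
plan = builder.build_serialized_network(network, config)
with open("model.trt", "wb") as f:
    f.write(plan)
engine = trt.Runtime(logger).deserialize_cuda_engine(plan)
```

The inference side also changed: the old `num_bindings`-based accessors are replaced by the named-tensor API (`engine.num_io_tensors`, `engine.get_tensor_name`, `context.set_tensor_address`), so the runner code may need matching updates.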
https://docs.nvidia.com/deeplearning/tensorrt/container-release-notes/index.html
@yuananf does the tensorrt container take care of the NVIDIA CUDA toolkit in this case? If so, does it install 12.1+?
You can find the corresponding tensorrt/cuda/cudnn version here.
https://docs.nvidia.com/deeplearning/tensorrt/container-release-notes/index.html