Closed Fan-Bob closed 2 months ago
Thanks for the feedback! We'll consider it for a future release.
Debian is best
A Dockerfile for Linux is below; note it has not been tested:

# Base image with CUDA 12.1 support
FROM nvidia/cuda:12.1.1-cudnn8-runtime-ubuntu20.04

# Preconfigure tzdata; replace TZ with your desired timezone
ENV DEBIAN_FRONTEND=noninteractive
ENV TZ=Asia/Shanghai

# Set the working directory
WORKDIR /app

# Install Python 3, pip, ffmpeg, tzdata, and other dependencies
RUN apt-get update && apt-get install -y \
        python3 \
        python3-pip \
        ffmpeg \
        git \
        wget \
        tzdata \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Select the pip mirror based on the chosen region
ARG REGION
ENV REGION=${REGION}
RUN if [ "$REGION" = "Asia" ]; then \
        pip3 config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple ; \
    else \
        pip3 config set global.index-url https://pypi.org/simple ; \
    fi

# Install PyTorch, torchvision, and torchaudio with CUDA 12.1 support
RUN pip3 install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu121

# Clone the project repository
RUN git clone https://github.com/Chenyme/Chenyme-AAVT.git

# Create the model/whisper-large-v3 directory and download the model files
RUN mkdir -p /app/Chenyme-AAVT/model/whisper-large-v3 \
    && cd /app/Chenyme-AAVT/model/whisper-large-v3 \
    && wget https://hf-mirror.com/Systran/faster-whisper-large-v3/resolve/main/README.md \
    && wget https://hf-mirror.com/Systran/faster-whisper-large-v3/resolve/main/config.json \
    && wget https://hf-mirror.com/Systran/faster-whisper-large-v3/resolve/main/model.bin \
    && wget https://hf-mirror.com/Systran/faster-whisper-large-v3/resolve/main/preprocessor_config.json \
    && wget https://hf-mirror.com/Systran/faster-whisper-large-v3/resolve/main/tokenizer.json \
    && wget https://hf-mirror.com/Systran/faster-whisper-large-v3/resolve/main/vocabulary.json

# Set the working directory to the cloned repository
WORKDIR /app/Chenyme-AAVT

# Install additional Python dependencies (add any needed packages here)
RUN pip3 install streamlit

# Run font_data.py, then start the Streamlit app
CMD ["bash", "-c", "python3 project/font_data.py && streamlit run Chenyme-AAVT.py"]
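To use the Dockerfile above, you would build with the REGION build arg and run with GPU access. Like the Dockerfile itself, this is an untested sketch; the image tag `chenyme-aavt` is arbitrary, and 8501 is Streamlit's default port:

```shell
# Build, selecting the Tsinghua pip mirror via the REGION build arg
docker build --build-arg REGION=Asia -t chenyme-aavt .

# Run with GPU access (requires the NVIDIA Container Toolkit on the host)
# and expose Streamlit's default port 8501
docker run --gpus all -p 8501:8501 chenyme-aavt
```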
Thanks!
Linux Docker deployment is now supported~
Does the CUDA version on the host system need to match the one in the container?