Chenyme / Chenyme-AAVT

This is a fully automated (audio) video translation project. It uses Whisper to recognize the speech, a large AI model to translate the subtitles, and finally merges the subtitles back into the video to produce a translated video.
MIT License

[Suggestion] Feature: Docker deployment on Debian #36

Closed Fan-Bob closed 2 months ago

Fan-Bob commented 5 months ago

Debian is best

Chenyme commented 4 months ago

Thanks for the feedback! We will consider it going forward.

dhlsam commented 3 months ago

Debian is best

A Dockerfile for Linux is below (untested):

# Base image with CUDA 12.1 support
FROM nvidia/cuda:12.1.1-cudnn8-runtime-ubuntu20.04

# Set environment variables to preconfigure tzdata
ENV DEBIAN_FRONTEND=noninteractive
# Replace with your desired timezone
ENV TZ=Asia/Shanghai

# Set the working directory
WORKDIR /app

# Install Python 3, pip, ffmpeg, tzdata, and other dependencies
RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    ffmpeg \
    git \
    wget \
    tzdata \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Set the pip mirror based on the selected region
ARG REGION
ENV REGION=${REGION}
RUN if [ "$REGION" = "Asia" ]; then \
        pip3 config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple ; \
    else \
        pip3 config set global.index-url https://pypi.org/simple ; \
    fi

# Install PyTorch, torchvision, and torchaudio with CUDA 12.1 support
RUN pip3 install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu121

# Clone the required GitHub repository
RUN git clone https://github.com/Chenyme/Chenyme-AAVT.git

# Create the model/whisper-large-v3 directory and download the required files
RUN mkdir -p /app/Chenyme-AAVT/model/whisper-large-v3 \
    && cd /app/Chenyme-AAVT/model/whisper-large-v3 \
    && wget https://hf-mirror.com/Systran/faster-whisper-large-v3/resolve/main/README.md \
    && wget https://hf-mirror.com/Systran/faster-whisper-large-v3/resolve/main/config.json \
    && wget https://hf-mirror.com/Systran/faster-whisper-large-v3/resolve/main/model.bin \
    && wget https://hf-mirror.com/Systran/faster-whisper-large-v3/resolve/main/preprocessor_config.json \
    && wget https://hf-mirror.com/Systran/faster-whisper-large-v3/resolve/main/tokenizer.json \
    && wget https://hf-mirror.com/Systran/faster-whisper-large-v3/resolve/main/vocabulary.json

# Set the working directory to the cloned repository
WORKDIR /app/Chenyme-AAVT

# Install additional Python dependencies (add any needed packages here)
RUN pip3 install streamlit

# Run font_data.py and then start the Streamlit app
CMD ["bash", "-c", "python3 project/font_data.py && streamlit run Chenyme-AAVT.py"]

Chenyme commented 2 months ago


Thanks!

Chenyme commented 2 months ago

Linux Docker deployment is now supported~

solitudealma commented 3 weeks ago

Do the CUDA versions on the host system and in the container need to match?