VideoLingo is an all-in-one video translation, localization, and dubbing tool aimed at generating Netflix-quality subtitles. It eliminates stiff machine translations and multi-line subtitles while adding high-quality dubbing, enabling global knowledge sharing across language barriers.
Key features:
🎥 YouTube video download via yt-dlp
🎙️ Word-level subtitle recognition with WhisperX
📝 NLP and GPT-based subtitle segmentation
📚 GPT-generated terminology for coherent translation
🔄 3-step direct translation, reflection, and adaptation for professional-level quality (see the sketch below)
✅ Netflix-standard single-line subtitles only
🗣️ Dubbing alignment with GPT-SoVITS and other methods
🚀 One-click startup and output in Streamlit
📝 Detailed logging with progress resumption
Difference from similar projects: Single-line subtitles only, superior translation quality, seamless dubbing experience
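As a purely hypothetical illustration of the 3-step translation loop listed above (the prompts and the `ask_llm` helper are invented for this sketch, not VideoLingo's actual internals):

```python
# Hypothetical sketch of the "translate -> reflect -> adapt" idea; the prompts
# and the ask_llm callable are placeholders, not VideoLingo's real code.
def three_step_translate(line: str, ask_llm) -> str:
    draft = ask_llm(f"Translate this subtitle line faithfully: {line}")
    critique = ask_llm(f"Point out fluency and terminology issues: {draft}")
    return ask_llm(
        f"Rewrite the draft, fixing these issues.\nDraft: {draft}\nIssues: {critique}"
    )
```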
| Russian Translation | GPT-SoVITS Dubbing |
| --- | --- |
| https://github.com/user-attachments/assets/25264b5b-6931-4d39-948c-5a1e4ce42fa7 | https://github.com/user-attachments/assets/47d965b2-b4ab-4a0b-9d08-b49a7bf3508c |
Input Language Support (more to come):
🇺🇸 English 🤩 | 🇷🇺 Russian 😊 | 🇫🇷 French 🤩 | 🇩🇪 German 🤩 | 🇮🇹 Italian 🤩 | 🇪🇸 Spanish 🤩 | 🇯🇵 Japanese 😐 | 🇨🇳 Chinese* 😊
*Chinese uses a separate punctuation-enhanced Whisper model.
Translation supports all languages, while dubbing language depends on the chosen TTS method.
Windows users can double-click `OneKeyInstall&Start.bat` to install (requires Git). The script downloads Miniconda and sets up the complete environment. NVIDIA GPU users must first install CUDA 12.6 and cuDNN 9.3.0, then add `C:\Program Files\NVIDIA\CUDNN\v9.3\bin\12.6` to the system environment variables and restart.
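As a quick, optional sanity check after the restart (a sketch, not part of the installer), you can confirm the cuDNN directory actually landed on PATH:

```python
# Optional check that the cuDNN bin directory from above is on PATH.
import os

cudnn_bin = r"C:\Program Files\NVIDIA\CUDNN\v9.3\bin\12.6"
print("cuDNN on PATH:", cudnn_bin.lower() in os.environ.get("PATH", "").lower())
```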
macOS/Linux users should install from source, which requires a Python 3.10.0 environment:
```bash
git clone https://github.com/Huanshere/VideoLingo.git
cd VideoLingo
conda create -n videolingo python=3.10.0
conda activate videolingo
python install.py
streamlit run st.py
```
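For GPU setups, a quick sanity check (assuming `install.py` pulled in PyTorch, which the WhisperX dependency implies) confirms the GPU is visible before running the pipeline:

```python
# Verify that CUDA is visible to PyTorch before running the pipeline.
import torch

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("No GPU detected; transcription will run on CPU.")
```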
Alternatively, you can use Docker (requires CUDA 12.4 and an NVIDIA driver version >550); see the Docker docs:
```bash
docker build -t videolingo .
docker run -d -p 8501:8501 --gpus all videolingo
```
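The `-p 8501:8501` mapping exposes the Streamlit UI at http://localhost:8501. An illustrative script (not part of the project) can poll until the container is ready:

```python
# Illustrative only: wait for the containerized Streamlit UI to come up
# on the port mapped by the docker run command above.
import time
import urllib.request

for _ in range(30):
    try:
        urllib.request.urlopen("http://localhost:8501", timeout=2)
        print("VideoLingo UI is up at http://localhost:8501")
        break
    except OSError:
        time.sleep(2)
else:
    print("UI did not come up within 60 seconds; check `docker logs`.")
```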
The project supports the OpenAI-like API format and various dubbing interfaces:

- LLM: `claude-3-5-sonnet-20240620`, `gemini-1.5-pro-002`, `gpt-4o`, `qwen2.5-72b-instruct`, `deepseek-coder`, ... (sorted by performance)
- TTS: `azure-tts`, `openai-tts`, `siliconflow-fishtts`, `fish-tts`, `GPT-SoVITS`
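All of the listed LLMs are reached through the same OpenAI-compatible calling convention. A minimal sketch of that pattern (the `base_url` and key are placeholders, not VideoLingo's config format):

```python
# Minimal sketch of the OpenAI-compatible call pattern shared by the models
# above; api_key and base_url are placeholders for your provider's values.
from openai import OpenAI

client = OpenAI(api_key="YOUR_KEY", base_url="https://your-provider.example/v1")
resp = client.chat.completions.create(
    model="claude-3-5-sonnet-20240620",  # any model name your provider exposes
    messages=[{"role": "user", "content": "Translate to French: Hello, world."}],
)
print(resp.choices[0].message.content)
```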
For detailed installation, API configuration, and batch mode instructions, please refer to the documentation: English | 中文
WhisperX transcription performance may be affected by background noise in the video, as it uses a wav2vec model for forced alignment. Even so, WhisperX still resolves 99% of Whisper's hallucination issues.
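For reference, a minimal standalone transcription-plus-alignment sketch using WhisperX's public API directly (not VideoLingo's internal pipeline):

```python
# Minimal WhisperX usage sketch: transcribe, then force-align to word level.
import whisperx

device = "cuda"  # or "cpu"
model = whisperx.load_model("large-v3", device)
audio = whisperx.load_audio("input.mp4")
result = model.transcribe(audio)

align_model, metadata = whisperx.load_align_model(
    language_code=result["language"], device=device
)
aligned = whisperx.align(result["segments"], align_model, metadata, audio, device)
print(aligned["segments"][0]["words"])  # word-level timestamps
```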
Weaker models can cause errors in intermediate steps because responses must follow a strict JSON format. If such an error occurs, delete the `output` folder and retry with a different LLM; otherwise, re-running will read the previous erroneous response and reproduce the same error.
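Since `json_repair` is among the projects credited below, here is an illustrative (not VideoLingo's actual) recovery pattern for a malformed LLM response:

```python
# Illustrative recovery of a malformed JSON response using json_repair.
# Not VideoLingo's actual code.
import json

from json_repair import repair_json

raw = '{"translation": "Bonjour", "notes": ["truncated response"'  # broken JSON
try:
    data = json.loads(raw)
except json.JSONDecodeError:
    data = json.loads(repair_json(raw))
print(data)
```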
The dubbing feature may not be 100% perfect due to differences in speech rate and intonation between languages, as well as the influence of the translation step. However, the project applies extensive speech-rate engineering to achieve the best possible dubbing results.
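As a toy illustration of the speech-rate idea (not the project's actual implementation), dubbed audio can be stretched or compressed with ffmpeg's `atempo` filter so that it fits its subtitle slot; the file names and lengths below are made up:

```python
# Toy illustration: stretch or compress dubbed audio to fit a subtitle slot
# using ffmpeg's atempo filter. NOT VideoLingo's actual implementation.
import subprocess

def fit_to_slot(in_wav: str, out_wav: str, audio_len: float, slot_len: float) -> None:
    tempo = audio_len / slot_len          # >1.0 speeds up, <1.0 slows down
    tempo = max(0.5, min(tempo, 2.0))     # stay in atempo's conservative range
    subprocess.run(
        ["ffmpeg", "-y", "-i", in_wav, "-filter:a", f"atempo={tempo}", out_wav],
        check=True,
    )

fit_to_slot("dub_line.wav", "dub_line_fit.wav", audio_len=3.4, slot_len=2.8)
```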
Multilingual videos are transcribed with only the main language retained: WhisperX uses a single-language model when force-aligning word-level subtitles, and discards segments in unrecognized languages.
Multiple speakers cannot be dubbed separately, as WhisperX's speaker diarization is not sufficiently reliable.
This project is licensed under the Apache 2.0 License. Special thanks to the following open source projects for their contributions:
whisperX, yt-dlp, json_repair, BELLE
If you find VideoLingo helpful, please give us a ⭐️!