Welcome to HeyGenClone, an open-source analogue of the HeyGen system.
I am a developer from Moscow 🇷🇺 who spends his free time exploring new technologies. The project is under active development, but I hope it will help you achieve your goals!
Currently, translation is supported only from English 🇬🇧!
```bash
cd path_to_project
sh install.sh
```
| Key | Description |
|---|---|
| DET_TRESH | Face detection threshold [0.0:1.0] |
| DIST_TRESH | Face embedding distance threshold [0.0:1.0] |
| HF_TOKEN | Your HuggingFace token (see Installation) |
| USE_ENHANCER | Whether to enhance faces with GFPGAN |
| ADD_SUBTITLES | Whether to add subtitles to the output video |
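As a sketch of how these settings might be validated before a run (the dict layout and the `validate_config` helper are illustrative assumptions, not the project's actual config format):

```python
# Hypothetical example: sanity-checking HeyGenClone-style settings.
# The key names mirror the table above; everything else is an assumption.

def validate_config(config: dict) -> list[str]:
    """Return human-readable problems; an empty list means the config looks sane."""
    problems = []
    # Both thresholds are documented as values in [0.0:1.0].
    for key in ("DET_TRESH", "DIST_TRESH"):
        value = config.get(key)
        if not isinstance(value, (int, float)) or not 0.0 <= value <= 1.0:
            problems.append(f"{key} must be a number in [0.0:1.0], got {value!r}")
    if not config.get("HF_TOKEN"):
        problems.append("HF_TOKEN is empty; see the Installation section")
    # The two feature toggles should be plain booleans.
    for key in ("USE_ENHANCER", "ADD_SUBTITLES"):
        if not isinstance(config.get(key), bool):
            problems.append(f"{key} must be true or false")
    return problems

config = {
    "DET_TRESH": 0.3,
    "DIST_TRESH": 0.85,
    "HF_TOKEN": "hf_...",
    "USE_ENHANCER": False,
    "ADD_SUBTITLES": True,
}
print(validate_config(config))  # an empty list means no problems found
```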
English (en), Spanish (es), French (fr), German (de), Italian (it), Portuguese (pt), Polish (pl), Turkish (tr), Russian (ru), Dutch (nl), Czech (cs), Arabic (ar), Chinese (zh-cn), Japanese (ja), Hungarian (hu) and Korean (ko)
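The list above can double as a quick pre-flight check before starting a translation; this helper and its error message are illustrative, not part of the project's API:

```python
# Hypothetical helper: reject unsupported output language codes up front.
# The set mirrors the supported-languages list above.
SUPPORTED_LANGUAGES = {
    "en", "es", "fr", "de", "it", "pt", "pl", "tr",
    "ru", "nl", "cs", "ar", "zh-cn", "ja", "hu", "ko",
}

def check_language(code: str) -> str:
    """Normalize a language code and raise if it is not supported."""
    normalized = code.strip().lower()
    if normalized not in SUPPORTED_LANGUAGES:
        raise ValueError(f"Unsupported output language: {code!r}")
    return normalized

print(check_language("RU"))  # -> ru
```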
```bash
conda activate your_env_name
cd path_to_project
```
The project root contains a translation script, translate.py, that translates the video you specify.
```bash
python translate.py video_filename output_language -o output_filename
```
I also added a script that overlays a voice onto a video with lip sync, letting you create a video of a person pronouncing your speech. Currently it works only for videos with a single person.
```bash
python speech_changer.py voice_filename video_filename -o output_filename
```
Note that this example was created without using GFPGAN!

| Destination language | Source video | Output video |
|---|---|---|
| 🇷🇺 (Russian) | | |