-
Hi,
This is my first time trying to use Face2lip. I think everything is set up correctly, but I get this error:
--face argument must be a valid path to video/image file
Thanks for your help!
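A quick way to rule out path problems before launching inference — a minimal sketch, assuming the tool only accepts common video/image extensions (the exact list in the repo may differ):

```
import os

# Extensions assumed here for illustration; check the repo's inference
# script for the exact list it accepts.
VALID_EXTS = {".mp4", ".avi", ".mov", ".jpg", ".jpeg", ".png"}

def is_valid_face_path(path):
    """Return True if `path` points to an existing video/image file."""
    return os.path.isfile(path) and os.path.splitext(path)[1].lower() in VALID_EXTS

print(is_valid_face_path("inputs/face.mp4"))  # False if the file does not exist
```

If this prints False, the `--face` error above is expected; also double-check the directory you run the command from, since relative paths resolve against it.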
-
Hi Dan, I have some thoughts on creating a video in autovidos; I hope this helps.
AudioLDM: can create background audio for your video.
Generate speech, sound effects, music and beyond, with tex…
-
The pretrained model wav2lip_hq.pdparams is about 139 MB. Here is the training command I used:
```
export CUDA_VISIBLE_DEVICES=0,1,2,3
python -m paddle.distributed.launch \
tools/main.py \
--config-file configs/wav2lip_hq.yaml \
```
In the outpu…
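One sanity check worth doing (not part of the repo) before blaming the training code: a partial download is a common cause of checkpoint load failures, so the file size on disk should be close to the ~139 MB noted above.

```
import os

path = "wav2lip_hq.pdparams"  # hypothetical local path to the checkpoint
if os.path.exists(path):
    size_mb = os.path.getsize(path) / 1e6
    print(f"checkpoint size: {size_mb:.0f} MB")  # expect roughly 139 MB
else:
    print("checkpoint not found")
```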
-
My Stable Diffusion webUI is installed in a conda virtual environment. As soon as the /sd-wav2lip-uhq extension is installed, the webUI fails to open. Could this be a conflict with the virtual environment?
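To diagnose a possible environment conflict, it can help to confirm which interpreter the webUI process is actually running under — a minimal check using only the standard library:

```
import sys

# Print the interpreter and environment root this process is using.
# If this is not your conda env's path, the extension may be installing
# its dependencies into a different environment than the one webUI uses.
print(sys.executable)  # full path to the running Python binary
print(sys.prefix)      # root directory of the active (virtual) environment
```

Running this from the webUI's launch script (or comparing it against `which python` in the activated conda env) shows whether both point at the same environment.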
-
When I run the project in Colab, it shows the error below:
Using cuda for inference.
Reading video frames...
Number of frames available for inference: 223
Traceback (most recent call l…
-
https://user-images.githubusercontent.com/7344969/165025748-8f46ff08-7a33-43e9-86c8-9b604caa2fb8.mp4
-
1. Convert each clip to 25 fps video with 16000 Hz audio, shorter than 5 s.
2. Run run_pipeline.py (in syncnet_python).
3. Run run_syncnet.py (in syncnet_python), then check the AV offset value. If it is between [-1, 1], keep the video.…
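The keep/drop rule in step 3 can be sketched as follows (the function name and the offsets are illustrative, not syncnet_python's API):

```
def keep_clip(av_offset):
    """Keep a clip only if its audio-video offset lies in [-1, 1] frames."""
    return -1 <= av_offset <= 1

# Hypothetical AV offsets as reported by run_syncnet.py for three clips.
offsets = {"clip_a.mp4": 0, "clip_b.mp4": 3, "clip_c.mp4": -1}
kept = [name for name, off in offsets.items() if keep_clip(off)]
print(kept)  # → ['clip_a.mp4', 'clip_c.mp4']
```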
-
It takes far too long. Without even reading the code, you can tell there is a lot of inefficiency in it.
-
```
import paddlehub as hub
import os
# Get the directory of the current file
parent = os.path.dirname(os.path.abspath(__file__)) + "/"
module = hub.Module(name="wav2lip")
face_input_path = parent + "1.jpg"
audio_…
```
-
I have captured dynamic images from the original video and the new video.
This is the original picture:
![original](https://github.com/anothermartz/Easy-Wav2Lip/assets/8635721/ee72d2f5-0f6a-48f7-b30…