wjnodejs opened this issue 10 months ago
Has the same problem been resolved?
@Linjiahua Perhaps you could try this image: docker pull pytorch/pytorch:2.1.0-cuda12.1-cudnn8-devel. After running the image and cloning this repository, I used the following commands to set up the environment and run inference, and it finally succeeded:
apt-get update
apt install ffmpeg
pip install -r requirements.txt
python3 inference.py --face examples/face/1.mp4 --audio examples/audio/1.wav --outfile results/1_1.mp4
@wjnodejs @Linjiahua I ran into the same error and fixed it by changing
while idx < len(img_np_list)//2:
to
while idx < len(img_np_list):
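For context, a minimal sketch of why the //2 bound skips frames. The list contents here are placeholders, not the repository's actual frame data; only the loop shape matches the fix above:

```python
# Stand-in for the decoded video frames (hypothetical data).
img_np_list = ["frame%d" % i for i in range(6)]

# Original loop: the //2 bound stops halfway, so only the first
# half of the frames is ever processed.
processed_half = []
idx = 0
while idx < len(img_np_list) // 2:
    processed_half.append(img_np_list[idx])
    idx += 1

# Suggested fix: iterate over every frame.
processed_all = []
idx = 0
while idx < len(img_np_list):
    processed_all.append(img_np_list[idx])
    idx += 1

print(len(processed_half), len(processed_all))  # half vs. full count
```

With six frames, the original loop handles only three of them, which is why the output breaks partway through.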
(video-retalking) C:\Users\fwq\Desktop\video-retalking>python inference.py --face examples/face/car.jpg --audio examples/audio/car.m4a --outfile results/car_image.mp4 --face_det_batch_size 2 --LNet_batch_size 2
C:\Users\fwq\anaconda3\envs\video-retalking\lib\site-packages\torchvision\transforms\functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be removed in 0.17. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
  warnings.warn(
[Info] Using cuda for inference.
[Step 0] Number of frames available for inference: 1
Traceback (most recent call last):
  File "inference.py", line 345, in <module>
    main()
  File "inference.py", line 69, in main
    full_frames_RGB, crop, quad = croper.crop(full_frames_RGB, xsize=512)
  File "C:\Users\fwq\Desktop\video-retalking\utils\ffhq_preprocess.py", line 126, in crop
    if lm is None:
UnboundLocalError: local variable 'lm' referenced before assignment
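This UnboundLocalError usually means the landmark detector found no face in the input, so lm is never assigned before the if lm is None check runs. A minimal sketch of the failure pattern and a defensive guard (function and variable names here are illustrative, not the repository's exact code):

```python
def crop(detected_faces):
    # If detection returns an empty list, the loop body never runs,
    # so `lm` is never bound and the check below raises
    # UnboundLocalError instead of reporting "no face".
    for face in detected_faces:
        lm = face
    if lm is None:
        raise RuntimeError("no face detected")
    return lm


def crop_safe(detected_faces):
    lm = None  # initialize first, so the None check is always valid
    for face in detected_faces:
        lm = face
    if lm is None:
        # Now an empty detection produces a readable error instead
        # of an UnboundLocalError.
        raise RuntimeError("no face detected; try a clearer frontal image")
    return lm
```

In practice the crash here likely means the detector could not find a face in car.jpg at all, so checking the input image may matter more than patching the scoping bug.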