OpenTalker / video-retalking

[SIGGRAPH Asia 2022] VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild
https://opentalker.github.io/video-retalking/
Apache License 2.0

RuntimeError: PytorchStreamReader failed reading file data/36: file read failed #225

Closed ivoidcat closed 6 months ago

ivoidcat commented 6 months ago

```
python inference.py --face examples/face/1.mp4 --audio examples/audio/1.wav --outfile results/1_1.mp4
```

```
E:\Users\voidcat\anaconda3\envs\video_retalking\lib\site-packages\torchvision\transforms\functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be removed in 0.17. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
  warnings.warn(
[Info] Using cuda for inference.
[Step 0] Number of frames available for inference: 135
[Step 1] Landmarks Extraction in Video.
Traceback (most recent call last):
  File "E:\shuziren\video-retalking\inference.py", line 345, in <module>
    main()
  File "E:\shuziren\video-retalking\inference.py", line 81, in main
    kp_extractor = KeypointExtractor()
  File "E:\shuziren\video-retalking\third_part\face3d\extract_kp_videos.py", line 18, in __init__
    self.detector = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device=device)
  File "E:\Users\voidcat\anaconda3\envs\video_retalking\lib\site-packages\face_alignment\api.py", line 84, in __init__
    self.face_alignment_net = torch.jit.load(
  File "E:\Users\voidcat\anaconda3\envs\video_retalking\lib\site-packages\torch\jit\_serialization.py", line 162, in load
    cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files, _restore_shapes)  # type: ignore[call-arg]
RuntimeError: PytorchStreamReader failed reading file data/36: file read failed
```

ivoidcat commented 6 months ago

Okay, it's solved now.

dandansocrates commented 6 months ago

I have the same problem. How did you solve it?