TMElyralab / MuseTalk

MuseTalk: Real-Time High Quality Lip Synchronization with Latent Space Inpainting

CUDA error when running on Mac with CPU #4

Closed hotea closed 3 months ago

hotea commented 3 months ago
python -m scripts.inference --inference_config configs/inference/demo.yaml
add ffmpeg to path
Loads checkpoint by local backend from path: ./models/dwpose/dw-ll_ucoco_384.pth
{'task_1': {'video_path': 'data/video/demo.mov', 'audio_path': 'data/audio/demo.wav', 'bbox_shift': -7}}
/Users/sukai/Documents/ai/MuseTalk/musetalk/whisper/whisper/transcribe.py:79: UserWarning: FP16 is not supported on CPU; using FP32 instead
  warnings.warn("FP16 is not supported on CPU; using FP32 instead")
video in 25 FPS, audio idx in 50FPS
extracting landmarks...time consuming
reading images...
0it [00:00, ?it/s]
get key_landmark and face bounding boxes with the bbox_shift: -7
0it [00:00, ?it/s]
********************************************bbox_shift parameter adjustment**********************************************************
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/Users/hotea/Documents/ai/MuseTalk/scripts/inference.py", line 145, in <module>
    main(args)
  File "/Users/hotea/Documents/ai/MuseTalk/venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/hotea/Documents/ai/MuseTalk/scripts/inference.py", line 115, in main
    combine_frame = get_image(ori_frame,res_frame,bbox)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/hotea/Documents/ai/MuseTalk/musetalk/utils/blending.py", line 41, in get_image
    mask_image = face_seg(face_large)
                 ^^^^^^^^^^^^^^^^^^^^
  File "/Users/hotea/Documents/ai/MuseTalk/musetalk/utils/blending.py", line 17, in face_seg
    seg_image = fp(image)
                ^^^^^^^^^
  File "/Users/hotea/Documents/ai/MuseTalk/musetalk/utils/face_parsing/__init__.py", line 38, in __call__
    img = torch.unsqueeze(img, 0).cuda()
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/hotea/Documents/ai/MuseTalk/venv/lib/python3.12/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
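The failure comes from the hard-coded `.cuda()` call shown in the traceback (`musetalk/utils/face_parsing/__init__.py`, line 38), which asserts on a CPU-only PyTorch build such as the default one on macOS. Below is a minimal sketch of the usual device-aware pattern; the helper name `to_batch` and the dummy tensor are illustrative only, not taken from the MuseTalk source, and the same `.to(device)` change would also be needed for the parsing model itself and any other hard-coded `.cuda()` calls in the repo.

```python
import torch

# Prefer CUDA when it is actually available, otherwise fall back to CPU
# (covers Macs and other machines without an NVIDIA GPU).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def to_batch(img_tensor: torch.Tensor) -> torch.Tensor:
    # Device-aware equivalent of the failing line:
    #   original: torch.unsqueeze(img, 0).cuda()
    return torch.unsqueeze(img_tensor, 0).to(device)

if __name__ == "__main__":
    dummy = torch.zeros(3, 512, 512)      # stand-in for a preprocessed face crop
    batch = to_batch(dummy)
    print(batch.shape, batch.device)      # e.g. torch.Size([1, 3, 512, 512]) cpu
```

Note that even with this change, CPU-only inference will be much slower than the real-time performance advertised for GPU setups.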