(GPTSoVits) ➜ V-Express git:(main) ✗ python inference.py \
    --reference_image_path "./test_samples/short_case/tys/ref.jpg" \
    --audio_path "./test_samples/short_case/tys/aud.mp3" \
    --output_path "./output/short_case/talk_tys_fix_face.mp4" \
    --retarget_strategy "fix_face" \
    --num_inference_steps 25
Traceback (most recent call last):
  File "/Users/ga666666/Desktop/V-Express/inference.py", line 277, in <module>
    main()
  File "/Users/ga666666/Desktop/V-Express/inference.py", line 139, in main
    vae = AutoencoderKL.from_pretrained(vae_path).to(dtype=dtype, device=device)
  File "/opt/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1145, in to
    return self._apply(convert)
  File "/opt/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/opt/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/opt/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
  File "/opt/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/opt/miniconda3/envs/GPTSoVits/lib/python3.9/site-packages/torch/cuda/__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
(GPTSoVits) ➜ V-Express git:(main) ✗
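The traceback shows a CPU-only PyTorch build (typical on macOS) while the script tries to move the VAE to a CUDA device. A minimal sketch of the usual workaround, a device-fallback helper; this is not V-Express's own code, and whether inference.py accepts a device flag is an assumption to verify against `python inference.py --help`:

```python
import torch

def select_device() -> torch.device:
    """Pick the best available backend: CUDA, then Apple MPS, then CPU.

    Hypothetical helper, not part of V-Express; it illustrates the
    fallback one would wire into inference.py's device selection.
    """
    if torch.cuda.is_available():
        return torch.device("cuda")
    # MPS is only present in newer torch builds, so guard the attribute.
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = select_device()
# float16 is generally safe on CUDA; fall back to float32 elsewhere.
dtype = torch.float16 if device.type == "cuda" else torch.float32
```

Pointing the `.to(dtype=dtype, device=device)` call at this device instead of a hard-coded `"cuda"` avoids the `AssertionError`; expect CPU inference to be very slow for a diffusion pipeline like this.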