Here is the error I get when trying to run this after installation:
A matching Triton is not available, some optimizations will not be enabled
Traceback (most recent call last):
File "C:\AI\Hallo\hallo-for-windows\venv\lib\site-packages\xformers\__init__.py", line 55, in _is_triton_available
from xformers.triton.softmax import softmax as triton_softmax # noqa
File "C:\AI\Hallo\hallo-for-windows\venv\lib\site-packages\xformers\triton\softmax.py", line 11, in <module>
import triton
ModuleNotFoundError: No module named 'triton'
WARNING:py.warnings:C:\AI\Hallo\hallo-for-windows\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'AzureExecutionProvider, CPUExecutionProvider'
warnings.warn(
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
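(Aside, not from the log: the UserWarning above means the installed ONNX Runtime wheel is the CPU-only build, so requesting 'CUDAExecutionProvider' silently falls back to CPU. A minimal sketch of a defensive pattern, assuming you want to request only providers the runtime actually reports as available; `pick_providers` is a hypothetical helper, not part of any library:)

```python
# Hypothetical helper: intersect the providers we would like with the
# providers this ONNX Runtime build actually exposes, preserving preference
# order, so session creation never warns about an unavailable provider.
def pick_providers(available):
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    return [p for p in preferred if p in available]

# With the providers reported in the log above (no CUDA provider),
# only the CPU provider survives the filter.
print(pick_providers(["AzureExecutionProvider", "CPUExecutionProvider"]))
# → ['CPUExecutionProvider']
```

In a real session you would feed it `onnxruntime.get_available_providers()`; getting 'CUDAExecutionProvider' to appear at all typically requires the `onnxruntime-gpu` package rather than the CPU-only `onnxruntime` one.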
find model: ./pretrained_models/face_analysis\models\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./pretrained_models/face_analysis\models\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./pretrained_models/face_analysis\models\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./pretrained_models/face_analysis\models\glintr100.onnx recognition ['None', 3, 112, 112] 127.5 127.5
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./pretrained_models/face_analysis\models\scrfd_10g_bnkps.onnx detection [1, 3, '?', '?'] 127.5 128.0
set det-size: (640, 640)
WARNING:py.warnings:C:\AI\Hallo\hallo-for-windows\venv\lib\site-packages\insightface\utils\transform.py:68: FutureWarning: rcond parameter will change to the default of machine precision times max(M, N) where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass rcond=None, to keep using the old, explicitly pass rcond=-1.
P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
W0000 00:00:1719463778.519268 20104 face_landmarker_graph.cc:174] Sets FaceBlendshapesGraph acceleration to xnnpack by default.
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
W0000 00:00:1719463778.529072 26848 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
W0000 00:00:1719463778.539180 11684 inference_feedback_manager.cc:114] Feedback manager requires a model with a single signature inference. Disabling support for feedback tensors.
WARNING:py.warnings:C:\AI\Hallo\hallo-for-windows\venv\lib\site-packages\google\protobuf\symbol_database.py:55: UserWarning: SymbolDatabase.GetPrototype() is deprecated. Please use message_factory.GetMessageClass() instead. SymbolDatabase.GetPrototype() will be removed soon.
warnings.warn('SymbolDatabase.GetPrototype() is deprecated. Please '
Processed and saved: ./.cache\1_sep_background.png
Processed and saved: ./.cache\1_sep_face.png
Some weights of Wav2VecModel were not initialized from the model checkpoint at ./pretrained_models/wav2vec/wav2vec2-base-960h and are newly initialized: ['wav2vec2.encoder.pos_conv_embed.conv.parametrizations.weight.original0', 'wav2vec2.encoder.pos_conv_embed.conv.parametrizations.weight.original1', 'wav2vec2.masked_spec_embed']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
INFO:audio_separator.separator.separator:Separator version 0.17.2 instantiating with output_dir: ./.cache\audio_preprocess, output_format: WAV
INFO:audio_separator.separator.separator:Operating System: Windows 10.0.22631
INFO:audio_separator.separator.separator:System: Windows Node: SI24 Release: 10 Machine: AMD64 Proc: Intel64 Family 6 Model 183 Stepping 1, GenuineIntel
INFO:audio_separator.separator.separator:Python Version: 3.10.11
INFO:audio_separator.separator.separator:PyTorch Version: 2.2.2+cu121
INFO:audio_separator.separator.separator:FFmpeg installed: ffmpeg version 2024-05-02-git-71669f2ad5-full_build-www.gyan.dev Copyright (c) 2000-2024 the FFmpeg developers
INFO:audio_separator.separator.separator:ONNX Runtime CPU package installed with version: 1.18.0
INFO:audio_separator.separator.separator:CUDA is available in Torch, setting Torch device to CUDA
WARNING:audio_separator.separator.separator:CUDAExecutionProvider not available in ONNXruntime, so acceleration will NOT be enabled
INFO:audio_separator.separator.separator:Loading model Kim_Vocal_2.onnx...
INFO:audio_separator.separator.separator:Load model duration: 00:00:00
INFO:audio_separator.separator.separator:Starting separation process for audio_file_path: examples/driving_audios/1.wav
LLVM ERROR: Symbol not found: __svml_cosf8_ha
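(Aside, not from the log: `__svml_cosf8_ha` is an Intel SVML symbol, and this crash is commonly reported when numba/llvmlite, pulled in here via the audio-separator/librosa stack, tries to use SVML on a machine where the library is not usable. A commonly suggested workaround, under that assumption, is to disable SVML before numba is first imported:)

```python
import os

# Assumption: the LLVM ERROR comes from numba/llvmlite probing Intel SVML.
# NUMBA_DISABLE_INTEL_SVML is a documented numba environment variable; it
# must be set BEFORE numba is imported anywhere in the process, so this
# belongs at the very top of the entry script (or in the shell environment).
os.environ["NUMBA_DISABLE_INTEL_SVML"] = "1"

print(os.environ["NUMBA_DISABLE_INTEL_SVML"])
# → 1
```

Setting it in the shell before launching (`set NUMBA_DISABLE_INTEL_SVML=1` on Windows) has the same effect and avoids editing the script.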