sunyuhan19981208 closed this issue 1 year ago.
If you are using an NVIDIA GPU, that is what will be used. What is the model of your GPU?
Yes, I am using an NVIDIA GPU. I found that the GPU is used, but its utilization is very low. Does it need to be optimized?
While CPU memory usage is high, GPU memory usage seems very low. I am using a sovits model I downloaded from Hugging Face: https://huggingface.co/TachibanaKimika/so-vits-svc-4.0-models/tree/main/aisa
Are you using the prebuilt Windows binaries? I had the same issue until I cloned the repo and ran the server through WSL. My CUDA device was being detected correctly, but it would still use my CPU even when GPU was set to 0.
Yes, I am using the prebuilt Windows binaries.
Could you tell me the name of the GPU? A 16x?
3060
Can you show me a log from the terminal? Or can you find a string such as "VoiceChanger Initialized (GPU_NUM:1, mps_enabled:False)"? If GPU_NUM is 0, your GPU is not detected by the software.
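(For reference, a quick way to double-check GPU visibility from Python yourself; this is a minimal sketch assuming PyTorch and onnxruntime-gpu are installed in the same environment the server uses, not part of the voice changer itself:)

```python
# gpu_check.py -- minimal sketch to confirm CUDA is visible from Python.
# Assumes torch and onnxruntime-gpu are installed; this is illustrative only.
import torch
import onnxruntime as ort

print("torch CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device name:", torch.cuda.get_device_name(0))

# onnxruntime reports which execution providers it can use;
# "CUDAExecutionProvider" should appear if GPU inference is possible.
print("onnxruntime providers:", ort.get_available_providers())
```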
I am happy to assist you with that task. However, as I am currently at my office, my personal computer is located at my home. I will be able to complete the task for you later tonight when I am back home.
D:\Downloads\MMVCServerSIO_win_onnxgpu-cuda_v.1.5.2.7\MMVCServerSIO>MMVCServerSIO.exe -p 18888 --https false --content_vec_500 checkpoint_best_legacy_500.pt --content_vec_500_onnx checkpoint_best_legacy_500.onnx --content_vec_500_onnx_on false --hubert_base chinese-hubert-large-fairseq-ckpt.pt --hubert_soft hubert-soft-0d54a1f4.pt --nsf_hifigan nsf_hifigan/model
Booting PHASE :main
Voice Changerを起動しています。
Internal_Port:18888
protocol: HTTP
ブラウザで次のURLを開いてください.
http://<IP>:<PORT>/
多くの場合は次のいずれかのURLにアクセスすると起動します。
http://localhost:18888/
Booting PHASE :__main__
Booting PHASE :MMVCServerSIO
VoiceChanger Initialized (GPU_NUM:1, mps_enabled:False)
voiceChangerManager <voice_changer.VoiceChangerManager.VoiceChangerManager object at 0x0000021BFD7FEE90>
[16932:0513/121420.528:ERROR:gpu_init.cc(523)] Passthrough is not supported, GL is disabled, ANGLE is
OK, the GPU is detected by the software. That level of GPU usage may actually be reasonable for this workload.
And if your CPU usage stays high, set the F0 Detector to dio. harvest is higher quality but very, very heavy.
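(For context, dio and harvest are the pitch estimators from the WORLD vocoder, and the cost difference is easy to see directly; a minimal sketch, assuming the pyworld and soundfile packages and a placeholder file input.wav, neither of which comes from this project:)

```python
# f0_compare.py -- rough timing comparison of the dio and harvest F0 estimators.
# Assumes pyworld and soundfile are installed; "input.wav" is a placeholder path.
import time
import numpy as np
import pyworld as pw
import soundfile as sf

audio, sr = sf.read("input.wav")
audio = audio.astype(np.float64)      # pyworld expects float64 audio
if audio.ndim > 1:
    audio = audio.mean(axis=1)        # downmix to mono if needed

t0 = time.time()
f0_dio, t_dio = pw.dio(audio, sr)     # fast, lighter on the CPU
print("dio:     %.2f s" % (time.time() - t0))

t0 = time.time()
f0_hv, t_hv = pw.harvest(audio, sr)   # more accurate, much heavier
print("harvest: %.2f s" % (time.time() - t0))
```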
Yes, my CPU usage stays high, and so does my CPU memory usage.
D:\xiazai\w-okada\MMVCServerSIO>MMVCServerSIO.exe -p 18888 --https false --content_vec_500 pretrain/checkpoint_best_legacy_500.pt --content_vec_500_onnx pretrain/checkpoint_best_legacy_500.onnx --content_vec_500_onnx_on false --hubert_base pretrain/hubert_base.pt --hubert_base_jp pretrain/rinna_hubert_base_jp.pt --hubert_soft pretrain/hubert/hubert-soft-0d54a1f4.pt --nsf_hifigan pretrain/nsf_hifigan/model --model_dir model_dir --samples samples.json
Booting PHASE :main
Voice Changerを起動しています。
++[Voice Changer] model_dir is already exists. skil download samples.
Internal_Port:18888
protocol: HTTP
ブラウザで次のURLを開いてください.
http://<IP>:<PORT>/
多くの場合は次のいずれかのURLにアクセスすると起動します。
http://localhost:18888/
Booting PHASE :__main__
Booting PHASE :MMVCServerSIO
VoiceChanger Initialized (GPU_NUM:1, mps_enabled:False)
[7876:0113/114955.928:ERROR:gpu_init.cc(523)] Passthrough is not supported, GL is disabled, ANGLE is
What should I do, please?
My GPU is a 3060, too.
During inference, the GPU is not being utilized, which is causing slower inference times. This is a concern, especially when dealing with large datasets and complex models. It appears that the inference code is not set up to take advantage of the GPU's processing power, even though the system has a GPU available.
Expected Outcome: We need to modify the inference code to ensure that the GPU is utilized during inference. This will improve inference times and allow us to handle larger datasets and more complex models.
Additional Information: The system has a compatible GPU installed and the necessary libraries are available. The issue appears to be related to the code itself, rather than a hardware or library compatibility issue.
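(For what it's worth, the usual pattern in PyTorch-based inference code is to move both the model weights and the input tensors onto the CUDA device; a minimal sketch against a generic torch model, not the voice changer's actual code:)

```python
# gpu_inference.py -- generic pattern for running inference on the GPU.
# This is a sketch using a plain torch.nn.Module placeholder, not this project's model.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(256, 256)    # placeholder model for illustration
model = model.to(device).eval()      # move weights to the GPU and switch to eval mode

x = torch.randn(1, 256).to(device)   # inputs must live on the same device as the model
with torch.no_grad():                # skip autograd bookkeeping during inference
    y = model(x)

print("output device:", y.device)    # should report cuda:0 when the GPU is used
```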