Open · nicola-spreafico opened this issue 1 year ago
GTX970 is too old to run RVC orzzzzzz
Maybe you should change the version of PyTorch.
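If you go that route, it may help to first confirm which PyTorch build and CUDA runtime you actually have, and what compute capability the card reports (a minimal check, independent of RVC; it only prints information):

```python
import torch

# Show the installed PyTorch build and the CUDA toolkit it was compiled with.
print("torch:", torch.__version__)
print("built with CUDA:", torch.version.cuda)

# Show what the GPU reports; the GTX 970 is a Maxwell card (compute capability 5.2).
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    print("compute capability:", torch.cuda.get_device_capability(0))
else:
    print("CUDA is not available to this PyTorch build")
```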
Ok, thank you, I will look for another PC. For the moment I used a virtual machine on a public cloud with a "1 x NVIDIA Tesla P100" GPU, and it all went fine.
The RTX 3090 also has the same problem.
It is a web UI issue: the GET never actually gets to run.
The issue can be solved by directly running infer/modules/train/extract_feature_print.py.
Remember to set the parameters for ['extract_f0_rmvpe.py', '2', '0', '0', 'C:\voice-ai\RVC0813Nvidia/logs/cia', 'True']:

```python
device = sys.argv[1]
n_part = int(sys.argv[2])
i_part = int(sys.argv[3])
...
```
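For example, one way to run such a script outside the web UI without editing it is to fill in sys.argv yourself and execute it with runpy (a minimal sketch, not RVC's own launcher; the script path and the argument values below just mirror the ones quoted in this thread, so check the sys.argv parsing at the top of the script to see what each position means on your install):

```python
import runpy
import sys

# Path and argument values are taken from the comments above; adjust both to
# whichever script and experiment folder apply to your own install.
script = "infer/modules/train/extract_feature_print.py"  # or the f0 extraction script

# Fake the command line the web UI would have produced.
sys.argv = [
    script,
    "2",
    "0",
    "0",
    r"C:\voice-ai\RVC0813Nvidia/logs/cia",
    "True",
]

# Execute the script as if it had been launched as `python <script> ...`.
runpy.run_path(script, run_name="__main__")
```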
I have the same problem on Linux with an RTX 3060:

```
File "/media/iwoolf/BigDrive/anaconda3/envs/rbvc-webui/lib/python3.10/site-packages/torch/autograd/__init__.py", line 266, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: GET was unable to find an engine to execute this computation
```
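For anyone hitting the same RuntimeError: as far as I can tell it is raised while cuDNN tries to pick a kernel for the backward pass, so a quick diagnostic (not a fix) is to print the cuDNN build PyTorch sees and retry a small convolution with cuDNN switched off; the shapes and device below are arbitrary placeholders:

```python
import torch

# Report the cuDNN build this PyTorch installation is using.
print("cuDNN available:", torch.backends.cudnn.is_available())
print("cuDNN version:", torch.backends.cudnn.version())

# Diagnostic only: rerun a tiny conv forward/backward with cuDNN disabled.
torch.backends.cudnn.enabled = False
conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1).to("cuda:0")
x = torch.randn(1, 3, 64, 64, device="cuda:0", requires_grad=True)
conv(x).sum().backward()
print("conv forward/backward ran with cuDNN disabled")
```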
Hello, I downloaded the portable app from https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/RVC0813Nvidia.7z. I have a 15-minute audio sample of my voice.
The preprocessing step went fine.
The extraction went fine
When using the One-Click Training button, this is the output in the web console:
But in the command-prompt window where I launched the application, this is the error I'm getting:
I also tried running model inference with an already built-in model, but there too I'm getting an error, and in the console I can see:
Can you help me understand how I can solve this issue?
I'm running Windows 11 22H2 and I have an NVIDIA GeForce GTX 970.
Thank you
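As a final sanity check for cards like the GTX 970, the snippet below runs a tiny half-precision forward/backward on the GPU outside of RVC; whether half precision is actually the culprit here is my assumption, and the device string and tensor sizes are placeholders:

```python
import torch

# Tiny fp16 forward/backward on the GPU; if even this fails, the problem lies
# in the driver/CUDA/PyTorch combination rather than in RVC itself.
device = "cuda:0"  # placeholder: pick the GPU you train on
x = torch.randn(32, 32, device=device, dtype=torch.float16, requires_grad=True)
loss = (x @ x).sum()
loss.backward()
print(torch.cuda.get_device_name(0), "completed an fp16 backward pass")
```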