Open c469591 opened 1 year ago
CPU inference of most AI models is already pretty slow as is. I don't think there is any popular machine learning software that has CPU training as an option. It would probably take weeks to get a decent result.
Maybe add later.
@c469591 use Google Colab
Yes, that's also a solution.
I still hope this will be added to the new version. Thanks!
Yes, I have the same problem. I want to train a model but I don't have a GPU.
Considering Google is now banning RVC code, I think something like this is better than nothing.
Google is nuts
Google and greed are one and the same, unfortunately. You can still run it, but only if you have Colab Pro.
I personally blame the influx of zoomers abusing the damn thing just to spam YouTube with their degenerate garbage; I saw this coming from a mile away.
I would like a CPU-only version because I have an Intel GPU.
You can try the dml version, or wait for #278.
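For context, "dml" here presumably refers to DirectML, which lets PyTorch use non-NVIDIA GPUs (Intel/AMD) on Windows through the torch-directml package. A minimal sketch of that backend on its own, independent of RVC (whether a given RVC build actually wires it up is a separate question):

import torch
import torch_directml  # pip install torch-directml

dml = torch_directml.device()   # DirectML device handle
x = torch.randn(4, 4, device=dml)
print((x @ x).device)           # ops stay on the DirectML device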
I have a GPU but I want to run inference on the CPU, because my GPU gives CUDA errors. Is there any parameter so that RVC does not detect the GPU?
So can we use CPUs to train or not?
It can. At least it worked when I tested it on my Linux server.
My server actually has a GPU. To disable it, I added os.environ["CUDA_VISIBLE_DEVICES"] = "-1" before importing torch in infer/modules/train/train.py:

from infer.lib.train import utils

hps = utils.get_hparams()
# os.environ["CUDA_VISIBLE_DEVICES"] = hps.gpus.replace("-", ",")
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # set to "-1" to test CPU training on a machine that has a GPU; it works, but is extremely slow

As for speed, in my very brief experience, an epoch that takes about 20 s on a 1080 Ti GPU took about 2 min on the CPU.
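For anyone who wants a quick sanity check before starting a long run, here is a minimal sketch (my own, not part of RVC) confirming that PyTorch stops seeing the GPU once the variable is set before torch is imported:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # must be set before the first torch import

import torch

print(torch.cuda.is_available())  # False: everything in this process falls back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)                     # cpu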
slow but still useful to ppl who wanna make models
What is the 'dml' mentioned above?
I know I'm late to the game, but I've trained on CPU only with no issues. I let it run all night on 6 cores and it gets the job done in 8-10 hours with harvest. Converting audio is about a 1:2 ratio: a 10-second audio clip takes about 20 seconds to convert.
How did you configure your RVC?
Hello, may I ask whether RVC supports training on CPU? I don't have a dedicated graphics card, and I ran into an error in the second step of training: "slow_conv2d_cpu" not implemented for 'Half'. Thank you!
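That error usually means a half-precision (fp16) convolution was attempted on the CPU, which many PyTorch builds do not implement. A small sketch of my own that reproduces it and shows the usual workaround of staying in float32; the RVC setting that controls this (often an is_half/fp16 option) varies between versions, so treat the exact name as an assumption:

import torch
import torch.nn as nn

# On PyTorch builds without CPU half-precision conv kernels (which is what the
# reported message indicates), this raises the same RuntimeError.
conv = nn.Conv2d(1, 1, 3).half()
x = torch.randn(1, 1, 8, 8, dtype=torch.float16)
try:
    conv(x)
except RuntimeError as e:
    print(e)  # "slow_conv2d_cpu" not implemented for 'Half'

# Workaround: keep the model and inputs in float32 when running on CPU.
y = conv.float()(x.float())
print(y.dtype)  # torch.float32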