CircuitCM / RVC-inference

High-performance RVC inference, designed to run multiple model instances in memory at once. Also includes the latest pitch estimator RMVPE; Python 3.8-3.11 compatible, pip installable, with memory and performance improvements in the pipeline and model usage.
MIT License

Possible to run on CPU? #2

Open Kiyuma opened 7 months ago

Kiyuma commented 7 months ago

I tried to run the code on a Colab CPU runtime. When it got to the `from inferrvc import RVC` line, it gave this error:

AssertionError                            Traceback (most recent call last)
[<ipython-input-7-90dde636fab7>](https://localhost:8080/#) in <cell line: 1>()
----> 1 from inferrvc import RVC
      2 rvc_model=RVC('model_e370_s2960.pth',index='added_IVF161_Flat_nprobe_1_model_v2.index'),RVC(model='personal')
      3 
      4 print(rvc_model.name)
      5 print('Paths',rvc_model.model_path,rvc_model.index_path)

3 frames
[/usr/local/lib/python3.10/dist-packages/torch/cuda/__init__.py](https://localhost:8080/#) in _lazy_init()
    291             )
    292         if not hasattr(torch._C, "_cuda_getDeviceCount"):
--> 293             raise AssertionError("Torch not compiled with CUDA enabled")
    294         if _cudart is None:
    295             raise AssertionError(

AssertionError: Torch not compiled with CUDA enabled

Does the inference support running on CPU? If not, is it possible for you to implement it?
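For reference, the error in the traceback comes from calling into `torch.cuda` on a CPU-only PyTorch build. A quick way to confirm which build you have, without triggering the `AssertionError`, is `torch.cuda.is_available()`, which simply returns `False` on builds compiled without CUDA:

```python
import torch

# On a CPU-only build this returns False instead of raising,
# so it is safe to call before touching anything else in torch.cuda.
has_cuda = torch.cuda.is_available()
device = torch.device("cuda" if has_cuda else "cpu")
print(has_cuda, device)
```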

Lunatunny commented 5 months ago

I'm getting this too. The readme says the library defaults to CPU, so it should work without a GPU, but it looks like the code touches the GPU right after checking which device to use, which makes torch throw.
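If the diagnosis above is right, the fix on the library side is to guard the device choice before any `torch.cuda` call is made. A minimal sketch of that pattern (the `pick_device` helper is hypothetical, not part of inferrvc's API):

```python
import torch

def pick_device(preferred: str = "cpu") -> torch.device:
    # Hypothetical helper: fall back to CPU when CUDA was requested but
    # the installed torch build has no CUDA support, instead of letting a
    # later torch.cuda call raise "Torch not compiled with CUDA enabled".
    if preferred.startswith("cuda") and not torch.cuda.is_available():
        return torch.device("cpu")
    return torch.device(preferred)
```

On a CPU-only Colab runtime, `pick_device("cuda:0")` would then quietly return the CPU device rather than crashing at import time.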

CircuitCM commented 5 months ago

Thanks for the bug report, I'll resolve this week.