High performance RVC inferencing, intended for multiple instances in memory at once. Also includes the latest pitch estimator RMVPE, Python 3.8-3.11 compatible, pip installable, memory + performance improvements in the pipeline and model usage.
I'm trying to run the example from the README on my CPU, and I get the error below:
```
Traceback (most recent call last):
  File "C:\Users\kuzum\git\pth_inferer\infer_and_serve.py", line 2, in <module>
    from inferrvc import RVC
  File "C:\Program Files\Python311\Lib\site-packages\inferrvc\__init__.py", line 10, in <module>
    from .modules import RVC,ResampleCache,download_models,load_torchaudio
  File "C:\Program Files\Python311\Lib\site-packages\inferrvc\modules.py", line 19, in <module>
    from .pipeline import Pipeline
  File "C:\Program Files\Python311\Lib\site-packages\inferrvc\pipeline.py", line 31, in <module>
    bh,ah=torch.from_numpy(bh).to(_gpu,non_blocking=True),torch.from_numpy(ah).to(_gpu,non_blocking=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\kuzum\AppData\Roaming\Python\Python311\site-packages\torch\cuda\__init__.py", line 293, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
```
Presumably it comes from this line: https://github.com/CircuitCM/RVC-inference/blob/dea0ec905b79d6e2648369bcd7ba5aca79d82785/inferrvc/pipeline.py#L31
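For anyone hitting this on a CPU-only torch build, one possible fix is a device-availability guard before the `.to(...)` calls, so the module never assumes CUDA at import time. This is only a sketch of the idea, not the library's actual code; `_gpu`, `bh`, and `ah` mirror the names in `pipeline.py`, but the coefficient values here are placeholders:

```python
import numpy as np
import torch

# Assumed fix: pick CUDA only when this torch build actually supports it,
# otherwise fall back to CPU instead of raising at import time.
_gpu = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

# Placeholder filter coefficients standing in for the ones computed in pipeline.py.
bh, ah = np.array([0.1, 0.2, 0.1]), np.array([1.0, -0.5, 0.25])

# Same shape as the failing line, but now safe on a CPU-only install.
bh, ah = (torch.from_numpy(bh).to(_gpu, non_blocking=True),
          torch.from_numpy(ah).to(_gpu, non_blocking=True))

print(bh.device.type)  # 'cpu' on a CUDA-less build, 'cuda' otherwise
```

`non_blocking=True` is a no-op for CPU targets, so keeping it is harmless either way.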