34j opened this issue 1 year ago
I'm trying to get this version working. I've installed the CPU version of torch because there is no ROCm build of torch for Windows.
After installing with `pip install -U git+https://github.com/34j/so-vits-svc-fork.git@feat/openml`,
this error occurs when running `svc train`:
Traceback (most recent call last):
File "C:\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "D:\Files\Code\python\so-vits-svc-fork\venv\Scripts\svc.exe\__main__.py", line 7, in <module>
File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\click\core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\click\core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\so_vits_svc_fork\__main__.py", line 130, in train
train(config_path=config_path, model_path=model_path)
File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\so_vits_svc_fork\train.py", line 41, in train
raise RuntimeError("CUDA is not available.")
RuntimeError: CUDA is not available.
After commenting out these two lines in train.py:
#if not torch.cuda.is_available():
#raise RuntimeError("CUDA is not available.")
this is the output of the command. The training does not start at all.
(venv) PS D:\Files\Code\python\so-vits-svc-fork> svc train
[17:19:29] INFO [17:19:29] Version: 1.3.3 __main__.py:49
Downloading D_0.pth: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 178M/178M [00:04<00:00, 41.2MiB/s]
Downloading G_0.pth: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 172M/172M [00:03<00:00, 47.8MiB/s]
(venv) PS D:\Files\Code\python\so-vits-svc-fork>
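As an aside, rather than deleting the check outright, the hard failure could be replaced with a CPU fallback. A minimal sketch (the `pick_device` helper name is my own illustration, not code from the repo):

```python
def pick_device() -> str:
    """Return "cuda" when a CUDA device is usable, otherwise "cpu".

    Degrades gracefully even when torch itself is missing, so this
    sketch also runs on a CPU-only install like the one above.
    """
    try:
        import torch
    except ImportError:
        # torch is not installed at all; only the CPU is available
        return "cpu"
    return "cuda" if torch.cuda.is_available() else "cpu"

print(pick_device())
```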
Could you remove that part, run `pip install torch-openml`,
and try again?
Nothing was found, so I tried `pip install openml-pytorch` instead,
but nothing changed.
Understood. I failed.
Sure, I did it. In fact, no CUDA error is printed anymore, but the command does nothing; it exits straight away.
https://github.com/34j/so-vits-svc-fork/blob/d7ab45010e73e63b765e9bd24c2635c2d18239be/src/so_vits_svc_fork/utils.py#L31
Can you check if this part is working? print(devices)
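To see what torch actually detects in that venv, something like the following could be run (the `list_devices` helper is illustrative, not the project's function, and it also works when torch is absent):

```python
def list_devices() -> list:
    """Enumerate compute devices visible to torch (illustrative helper)."""
    devices = ["cpu"]
    try:
        import torch
    except ImportError:
        # torch missing: only the CPU can be reported
        return devices
    if torch.cuda.is_available():
        devices += [f"cuda:{i}" for i in range(torch.cuda.device_count())]
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        devices.append("mps")
    return devices

print(list_devices())
```

On the setup described above (CPU-only torch on Windows), this would print only `['cpu']`, which matches the training command exiting without a GPU.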
Did `svc pre-hubert` work correctly?
Yes, hubert worked, but no GPU was used.
But in this branch, realtime inference works on GPU.
Edit: not really.
@allcontributors add pierluigizagaria userTesting
@34j
I've put up a pull request to add @pierluigizagaria! :tada:
It seems difficult to support, so I give up.
Can this project work with 3.11? I'm trying to install torch-mlir, which should make torch compatible with my AMD GPU on Windows.
I've already tried using torch-directml but got the error mentioned here: https://github.com/microsoft/DirectML/issues/400
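For context, the torch-directml usage that leads to that error looks roughly like this (a sketch: `torch_directml.device()` is the package's entry point, and the helper falls back to "cpu" when the package is absent):

```python
def directml_or_cpu():
    """Return a DirectML torch device when torch-directml is installed,
    otherwise the string "cpu" (sketch for AMD GPUs on Windows)."""
    try:
        import torch_directml
    except ImportError:
        # torch-directml not installed; fall back to CPU
        return "cpu"
    # Default DirectML device (typically the first DirectX 12 GPU)
    return torch_directml.device()

print(directml_or_cpu())
```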
> pipdeptree --reverse --packages llvmlite
Warning!!! Possibly conflicting dependencies found:
* poetry==1.4.2
- platformdirs [required: >=2.5.2,<3.0.0, installed: 3.2.0]
------------------------------------------------------------------------
Warning!! Cyclic dependencies found:
* poetry-plugin-export => poetry => poetry-plugin-export
* poetry => poetry-plugin-export => poetry
------------------------------------------------------------------------
llvmlite==0.39.1
- numba==0.56.4 [requires: llvmlite>=0.39.0dev0,<0.40]
- librosa==0.9.1 [requires: numba>=0.45.1]
- so-vits-svc-fork==3.0.4 [requires: librosa]
- torchcrepe==0.0.18 [requires: librosa==0.9.1]
- so-vits-svc-fork==3.0.4 [requires: torchcrepe>=0.0.17]
- resampy==0.4.2 [requires: numba>=0.53]
- librosa==0.9.1 [requires: resampy>=0.2.2]
- so-vits-svc-fork==3.0.4 [requires: librosa]
- torchcrepe==0.0.18 [requires: librosa==0.9.1]
- so-vits-svc-fork==3.0.4 [requires: torchcrepe>=0.0.17]
- scikit-maad==1.3.12 [requires: resampy>=0.2]
- so-vits-svc-fork==3.0.4 [requires: scikit-maad]
- torchcrepe==0.0.18 [requires: resampy]
- so-vits-svc-fork==3.0.4 [requires: torchcrepe>=0.0.17]
3.11 is not supported for the above reasons, but won't it work with 3.10?
I got an error while trying to install on 3.11
Sorry, my typo; I was trying to ask whether torch-mlir would work with 3.10.
They don't provide compiled Windows wheels for 3.10.
Since both inference and training rely on librosa as of now, 3.11 support is not possible.
Installing the rc version of numba may allow it to be used with Python 3.11, but it may cause other problems (I haven't tried it): https://github.com/numba/numba/issues/8841
Would it be possible to run this using DirectML? (I've only gotten DirectML to work on Python 3.10.6; I haven't tried it on newer versions.)
microsoft/DirectML#400
any update on this?
Is your feature request related to a problem? Please describe.
AMD GPUs are not supported on Windows.

Describe the solution you'd like
Add support for AMD GPUs on Windows.

Additional context