RVC-Project / Retrieval-based-Voice-Conversion-WebUI

Easily train a good VC model with voice data <= 10 mins!
MIT License

cuda error and runtime error #1369

Open · stickbrime opened this issue 1 year ago

stickbrime commented 1 year ago

When I was running the program, this happened:

Traceback (most recent call last):
  File "multiprocessing\process.py", line 315, in _bootstrap
  File "multiprocessing\process.py", line 108, in run
  File "E:\RVC-beta-v2-0528\train_nsf_sim_cache_sid_load_pretrain.py", line 170, in run
    net_g = DDP(net_g, device_ids=[rank])
  File "E:\RVC-beta-v2-0528\runtime\lib\site-packages\torch\nn\parallel\distributed.py", line 676, in __init__
    _sync_module_states(
  File "E:\RVC-beta-v2-0528\runtime\lib\site-packages\torch\distributed\utils.py", line 142, in _sync_module_states
    _sync_params_and_buffers(
  File "E:\RVC-beta-v2-0528\runtime\lib\site-packages\torch\distributed\utils.py", line 160, in _sync_params_and_buffers
    dist._broadcast_coalesced(
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Traceback (most recent call last):
  File "E:\RVC-beta-v2-0528\runtime\lib\site-packages\gradio\routes.py", line 321, in run_predict
    output = await app.blocks.process_api(
  File "E:\RVC-beta-v2-0528\runtime\lib\site-packages\gradio\blocks.py", line 1006, in process_api
    result = await self.call_function(fn_index, inputs, iterator, request)
  File "E:\RVC-beta-v2-0528\runtime\lib\site-packages\gradio\blocks.py", line 859, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "E:\RVC-beta-v2-0528\runtime\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "E:\RVC-beta-v2-0528\runtime\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "E:\RVC-beta-v2-0528\runtime\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "E:\RVC-beta-v2-0528\runtime\lib\site-packages\gradio\utils.py", line 408, in async_iteration
    return next(iterator)
  File "E:\RVC-beta-v2-0528\infer-web.py", line 1035, in train1key
    big_npy = np.concatenate(npys, 0)
  File "<__array_function__ internals>", line 180, in concatenate
ValueError: need at least one array to concatenate

I don't know why this happens, please help me!
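For context, the "no kernel image is available for execution on the device" error usually means the installed PyTorch build does not ship GPU kernels compiled for this card's compute capability. A minimal diagnostic sketch, assuming it is run with the same bundled runtime Python that RVC uses:

```python
import torch

# Report what this PyTorch build can actually execute on. The "no kernel
# image" error typically appears when the GPU's compute capability is not
# among the architectures in torch.cuda.get_arch_list().
print("CUDA available:", torch.cuda.is_available())
print("Built against CUDA:", torch.version.cuda)
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("Compute capability:", torch.cuda.get_device_capability(0))
    print("Kernel images in this build:", torch.cuda.get_arch_list())
```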

RVC-Boss commented 1 year ago

What's your GPU?

stickbrime commented 1 year ago

GPU 0: NVIDIA GeForce GT 730. Recently I got another error:

Process Process-1:
Traceback (most recent call last):
  File "multiprocessing\process.py", line 315, in _bootstrap
  File "multiprocessing\process.py", line 108, in run
  File "C:\RVC0813Nvidia\train_nsf_sim_cache_sid_load_pretrain.py", line 176, in run
    net_g = DDP(net_g, device_ids=[rank])
  File "C:\RVC0813Nvidia\runtime\lib\site-packages\torch\nn\parallel\distributed.py", line 676, in __init__
    _sync_module_states(
  File "C:\RVC0813Nvidia\runtime\lib\site-packages\torch\distributed\utils.py", line 142, in _sync_module_states
    _sync_params_and_buffers(
  File "C:\RVC0813Nvidia\runtime\lib\site-packages\torch\distributed\utils.py", line 160, in _sync_params_and_buffers
    dist._broadcast_coalesced(
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Traceback (most recent call last):
  File "C:\RVC0813Nvidia\runtime\lib\site-packages\gradio\routes.py", line 321, in run_predict
    output = await app.blocks.process_api(
  File "C:\RVC0813Nvidia\runtime\lib\site-packages\gradio\blocks.py", line 1006, in process_api
    result = await self.call_function(fn_index, inputs, iterator, request)
  File "C:\RVC0813Nvidia\runtime\lib\site-packages\gradio\blocks.py", line 859, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\RVC0813Nvidia\runtime\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\RVC0813Nvidia\runtime\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\RVC0813Nvidia\runtime\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\RVC0813Nvidia\runtime\lib\site-packages\gradio\utils.py", line 408, in async_iteration
    return next(iterator)
  File "C:\RVC0813Nvidia\infer-web.py", line 1283, in train1key
    big_npy = np.concatenate(npys, 0)
  File "<__array_function__ internals>", line 180, in concatenate
ValueError: need at least one array to concatenate
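The trailing ValueError is most likely a downstream symptom rather than a separate bug: when the CUDA step crashes, feature extraction writes no .npy files, so the list handed to np.concatenate in train1key is presumably empty. A minimal reproduction of that message:

```python
import numpy as np

# An empty input list reproduces the second traceback's error exactly; in
# train1key the list would be empty because the earlier CUDA failure
# prevented any feature files from being written.
try:
    np.concatenate([], 0)
except ValueError as e:
    print(e)  # need at least one array to concatenate
```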

stickbrime commented 1 year ago

I installed CUDA version 12.2.
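Note that installing the CUDA 12.2 toolkit system-wide does not change which CUDA version, or which GPU architectures, the bundled PyTorch runtime was compiled against. Depending on the variant, a GeForce GT 730 is a Fermi (compute capability 2.1) or Kepler (3.5) card, and recent official PyTorch builds generally no longer include kernels for either, which would produce exactly this error. A quick check (a sketch, assuming the bundled runtime is used):

```python
import torch

# The system CUDA toolkit is independent of the CUDA version PyTorch was
# built with; only the build's own kernel list matters for this error.
print("PyTorch:", torch.__version__)
print("PyTorch built against CUDA:", torch.version.cuda)
major, minor = torch.cuda.get_device_capability(0)  # GT 730: (2, 1) or (3, 5)
arch = f"sm_{major}{minor}"
print("GPU architecture:", arch)
print("Included in this build:", arch in torch.cuda.get_arch_list())
```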

stickbrime commented 1 year ago

In step 2, I got this:

RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
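As the message itself suggests, setting CUDA_LAUNCH_BLOCKING=1 makes CUDA calls report errors synchronously so the traceback points at the real failing call; it will not fix the error, only make the report clearer. It has to be set before PyTorch initializes CUDA, e.g. in the shell before launching the WebUI, or at the very top of the script (a sketch):

```python
import os

# Must be set before the first CUDA call (simplest: before importing torch),
# otherwise the CUDA context is created without it and it has no effect.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch  # imported only after the variable is set
```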

stickbrime commented 1 year ago

It happens every single time during pretraining, I guess.

Zeze42 commented 9 months ago

Did you fix it?