Bug Description
CUDA out-of-memory error when trying to train a model, even though I have an RTX 3090.
```
f0D40k.pth
Process Process-1:
Traceback (most recent call last):
File "/home/ed/mambaforge/envs/applio/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/home/ed/mambaforge/envs/applio/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/ed/Applio/rvc/train/train.py", line 258, in run
train_and_evaluate(
File "/home/ed/Applio/rvc/train/train.py", line 457, in train_and_evaluate
scaler.scale(loss_disc).backward()
File "/home/ed/mambaforge/envs/applio/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward
torch.autograd.backward(
File "/home/ed/mambaforge/envs/applio/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
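For context, the failing line `scaler.scale(loss_disc).backward()` is the standard PyTorch mixed-precision (AMP) training step. The sketch below is not taken from Applio's `train.py`; it is a minimal, hedged reconstruction of that pattern (model, loss, and names are placeholders), run with the scaler disabled so it works without a GPU. On a GPU, the `backward()` call is where activation memory is actually consumed, which is why the OOM surfaces there rather than in the forward pass.

```python
import torch

def train_step(model, batch, optimizer, scaler):
    # Mirrors the AMP pattern from the traceback: scale the loss, then
    # backprop. On a real GPU run, this backward() is where the
    # "CUDA error: out of memory" is raised.
    optimizer.zero_grad()
    loss = model(batch).pow(2).mean()  # placeholder loss, not Applio's loss_disc
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# enabled=False lets this illustration run on CPU; training uses enabled=True.
scaler = torch.cuda.amp.GradScaler(enabled=False)
loss = train_step(model, torch.randn(8, 4), optimizer, scaler)
print(loss >= 0.0)
```

If the 3090's 24 GB is genuinely exhausted at this point, the usual levers are reducing the batch size in the training settings or closing other processes holding VRAM (checkable with `nvidia-smi` under WSL).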
**Desktop Details:**
- Operating System: WSL (Ubuntu)
- Browser: Firefox