Closed arikhalperin closed 2 years ago
Hi @arikhalperin,
The error means you are out of memory.
You can either:
1) use a smaller batch size.
2) use a smaller model. You can do that by changing the swave.R
parameter (default is 6).
3) use a smaller segment size. You can do that by changing the segment
parameter (default is 4).
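As a sketch, the three options above can be combined as Hydra command-line overrides when launching training. The exact key names (`batch_size` in particular) are an assumption and should be checked against the project's conf/config.yaml:

```shell
# Hedged sketch: reduce GPU memory pressure via Hydra overrides.
# Assumes `batch_size`, `swave.R`, and `segment` are the keys used
# in conf/config.yaml; verify them before running.
python train.py batch_size=2 swave.R=4 segment=2
```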
It turns out my files were too big, I think. I split them and now it seems to be running correctly.
RuntimeError: Error opening 'debug/mix/2803-161169-0001_7976-110523-0004.wav': System error. How can I solve this?
Hello, I got this error during training:
```
[2022-02-02 16:48:03,855][svoice.solver][INFO] - Cross validation...
[2022-02-02 16:48:04,219][__main__][ERROR] - Some error happened
Traceback (most recent call last):
  File "train.py", line 120, in main
    _main(args)
  File "train.py", line 114, in _main
    run(args)
  File "train.py", line 95, in run
    solver.train()
  File "/home/ec2-user/speaker_seperation/svoice/svoice/solver.py", line 135, in train
    valid_loss = self._run_one_epoch(epoch, cross_valid=True)
  File "/home/ec2-user/speaker_seperation/svoice/svoice/solver.py", line 199, in _run_one_epoch
    estimate_source = self.dmodel(mixture)
  File "/home/ec2-user/speaker_seperation/svoice/venv/lib64/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/speaker_seperation/svoice/venv/lib64/python3.7/site-packages/torch/nn/parallel/distributed.py", line 511, in forward
    output = self.module(*inputs[0], **kwargs[0])
  File "/home/ec2-user/speaker_seperation/svoice/venv/lib64/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/speaker_seperation/svoice/svoice/models/swave.py", line 253, in forward
    output_all = self.separator(mixture_w)
  File "/home/ec2-user/speaker_seperation/svoice/venv/lib64/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/speaker_seperation/svoice/svoice/models/swave.py", line 214, in forward
    output_all = self.rnn_model(enc_segments)
  File "/home/ec2-user/speaker_seperation/svoice/venv/lib64/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/speaker_seperation/svoice/svoice/models/swave.py", line 108, in forward
    row_output = self.rows_grnn[i](row_input)
  File "/home/ec2-user/speaker_seperation/svoice/venv/lib64/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/speaker_seperation/svoice/svoice/models/swave.py", line 43, in forward
    rnn_output, _ = self.rnn(output)
  File "/home/ec2-user/speaker_seperation/svoice/venv/lib64/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/speaker_seperation/svoice/venv/lib64/python3.7/site-packages/torch/nn/modules/rnn.py", line 577, in forward
    self.dropout, self.training, self.bidirectional, self.batch_first)
RuntimeError: CUDA out of memory. Tried to allocate 7.38 GiB (GPU 0; 15.78 GiB total capacity; 6.89 GiB already allocated; 5.82 GiB free; 8.77 GiB reserved in total by PyTorch)
[2022-02-02 16:48:04,627][svoice.executor][ERROR] - Worker 3 died, killing all workers
```
Any idea how to resolve this?
Thanks, Arik Halperin