RuntimeError: The expanded size of the tensor (8192) must match the existing size (448) at non-singleton dimension 1. Target sizes: [1, 8192]. Tensor sizes: [448] #77
I run into this error when training reaches epoch 2 or epoch 4. I am using a dataset I collected and prepared myself.
INFO:baker_base:====> Epoch: 1
INFO:baker_base:====> Epoch: 2
Traceback (most recent call last):
File "train.py", line 291, in <module>
main()
File "train.py", line 47, in main
mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
File "D:\anaconda\anaconda1\envs\torch\lib\site-packages\torch\multiprocessing\spawn.py", line 240, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "D:\anaconda\anaconda1\envs\torch\lib\site-packages\torch\multiprocessing\spawn.py", line 198, in start_processes
while not context.join():
File "D:\anaconda\anaconda1\envs\torch\lib\site-packages\torch\multiprocessing\spawn.py", line 160, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "D:\anaconda\anaconda1\envs\torch\lib\site-packages\torch\multiprocessing\spawn.py", line 69, in _wrap
fn(i, args)
File "D:\vits_chinese-Yae\train.py", line 116, in run
train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, eval_loader], logger, [writer, writer_eval])
File "D:\vits_chinese-Yae\train.py", line 164, in train_and_evaluate
y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice
File "D:\vits_chinese-Yae\commons.py", line 54, in slice_segments
ret[i] = x[i, :, idx_str:idx_end]
RuntimeError: The expanded size of the tensor (8192) must match the existing size (448) at non-singleton dimension 1. Target sizes: [1, 8192]. Tensor sizes: [448]
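The shape mismatch suggests one of the audio clips is shorter than the training segment size: `slice_segments` preallocates a buffer of `segment_size` samples (8192 here) per batch item and copies a window of the waveform into it, so a clip with only 448 samples cannot fill the slot. A minimal sketch of that failure mode, using NumPy in place of PyTorch (function name and shapes mirror `commons.slice_segments`, but this is an illustrative reconstruction, not the repo's actual code):

```python
import numpy as np

def slice_segments(x, ids_str, segment_size):
    # Mimics commons.slice_segments: copy a fixed-size window from each
    # batch item [B, C, T] into a preallocated [B, C, segment_size] buffer.
    b, c, _ = x.shape
    ret = np.zeros((b, c, segment_size), dtype=x.dtype)
    for i in range(b):
        idx_str = ids_str[i]
        idx_end = idx_str + segment_size
        # If the clip is shorter than segment_size, the right-hand side is
        # smaller than ret[i] and the assignment raises a shape error,
        # analogous to the RuntimeError above.
        ret[i] = x[i, :, idx_str:idx_end]
    return ret

segment_size = 8192
short_wav = np.random.randn(1, 1, 448)  # 448 samples, matching the traceback
try:
    slice_segments(short_wav, [0], segment_size)
except ValueError as e:
    print("shape mismatch:", e)
```

If this is the cause, filtering out utterances shorter than `hps.train.segment_size` samples (or `segment_size / hop_length` mel frames) from the filelists before training should avoid the crash.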