Describe the bug
(so-vits-fork) C:\Users\BranchScope\so-vits-svc-fork\tests\dataset_raw\mina>svc train
Downloading D_0.pth: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 178M/178M [00:34<00:00, 5.41MiB/s]
Downloading G_0.pth: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 200M/200M [00:43<00:00, 4.77MiB/s]
[20:27:25] INFO [20:27:25] Using strategy: auto train.py:98
INFO: GPU available: True (cuda), used: True
INFO [20:27:25] GPU available: True (cuda), used: True rank_zero.py:53
INFO: TPU available: False, using: 0 TPU cores
INFO [20:27:25] TPU available: False, using: 0 TPU cores rank_zero.py:53
INFO: IPU available: False, using: 0 IPUs
INFO [20:27:25] IPU available: False, using: 0 IPUs rank_zero.py:53
INFO: HPU available: False, using: 0 HPUs
INFO [20:27:25] HPU available: False, using: 0 HPUs rank_zero.py:53
WARNING [20:27:25] C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\so_vits_svc_fork\modules\synthesizers.py:81: UserWarning: Unused warnings.py:109
arguments: {'n_layers_q': 3, 'use_spectral_norm': False, 'pretrained': {'D_0.pth':
'https://huggingface.co/datasets/ms903/sovits4.0-768vec-layer12/resolve/main/sovits_768l12_pre_large_320k/clean_D_320000.pth', 'G_0.pth':
'https://huggingface.co/datasets/ms903/sovits4.0-768vec-layer12/resolve/main/sovits_768l12_pre_large_320k/clean_G_320000.pth'}}
warnings.warn(f"Unused arguments: {kwargs}")
INFO [20:27:25] Decoder type: hifi-gan synthesizers.py:100
[20:27:26] WARNING [20:27:26] C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\so_vits_svc_fork\utils.py:246: UserWarning: Keys not found in warnings.py:109
checkpoint state dict:['emb_g.weight']
warnings.warn(f"Keys not found in checkpoint state dict:" f"{not_in_from}")
WARNING [20:27:26] C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\so_vits_svc_fork\utils.py:264: UserWarning: Shape mismatch: warnings.py:109
['dec.cond.weight: torch.Size([512, 256, 1]) -> torch.Size([512, 768, 1])', 'enc_q.enc.cond_layer.weight_v: torch.Size([6144, 256, 1]) ->
torch.Size([6144, 768, 1])', 'flow.flows.0.enc.cond_layer.weight_v: torch.Size([1536, 256, 1]) -> torch.Size([1536, 768, 1])',
'flow.flows.2.enc.cond_layer.weight_v: torch.Size([1536, 256, 1]) -> torch.Size([1536, 768, 1])', 'flow.flows.4.enc.cond_layer.weight_v:
torch.Size([1536, 256, 1]) -> torch.Size([1536, 768, 1])', 'flow.flows.6.enc.cond_layer.weight_v: torch.Size([1536, 256, 1]) ->
torch.Size([1536, 768, 1])', 'f0_decoder.cond.weight: torch.Size([192, 256, 1]) -> torch.Size([192, 768, 1])']
warnings.warn(
INFO [20:27:26] Loaded checkpoint 'logs\44k\G_0.pth' (epoch 0) utils.py:307
INFO [20:27:26] Loaded checkpoint 'logs\44k\D_0.pth' (epoch 0) utils.py:307
INFO: LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
INFO [20:27:26] LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0] cuda.py:58
┏━━━┳━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┓
┃ ┃ Name ┃ Type ┃ Params ┃
┡━━━╇━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━┩
│ 0 │ net_g │ SynthesizerTrn │ 45.6 M │
│ 1 │ net_d │ MultiPeriodDiscriminator │ 46.7 M │
└───┴───────┴──────────────────────────┴────────┘
Trainable params: 92.4 M
Non-trainable params: 0
Total params: 92.4 M
Total estimated model params size (MB): 369
[20:27:31] WARNING [20:27:31] C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\lightning\pytorch\trainer\connectors\data_connector.py:442: warnings.py:109
PossibleUserWarning: The dataloader, val_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the
`num_workers` argument` (try 16 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
rank_zero_warn(
WARNING [20:27:31] C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\lightning\pytorch\loops\fit_loop.py:281: PossibleUserWarning: The warnings.py:109
number of training batches (17) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if
you want to see logs for the training epoch.
rank_zero_warn(
INFO [20:27:31] Setting current epoch to 0 train.py:311
INFO [20:27:31] Setting total batch idx to 0 train.py:327
INFO [20:27:31] Setting global step to 0 train.py:317
Epoch 0/9999 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/17 0:00:00 • -:--:-- 0.00it/s v_num: 0[20:27:34] WARNING [20:27:34] C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\torch\_utils.py:776: UserWarning: TypedStorage is deprecated. It warnings.py:109
will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages
directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
Epoch 0/9999 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/17 0:00:03 • -:--:-- 0.00it/s v_num: 0
Traceback (most recent call last):
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\Scripts\svc.exe\__main__.py", line 7, in <module>
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\click\core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\click\core.py", line 1078, in main
rv = self.invoke(ctx)
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\click\core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\click\core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\click\core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\so_vits_svc_fork\__main__.py", line 128, in train
train(
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\so_vits_svc_fork\train.py", line 149, in train
trainer.fit(model, datamodule=datamodule)
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\lightning\pytorch\trainer\trainer.py", line 532, in fit
call._call_and_handle_interrupt(
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\lightning\pytorch\trainer\call.py", line 43, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\lightning\pytorch\trainer\trainer.py", line 571, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\lightning\pytorch\trainer\trainer.py", line 980, in _run
results = self._run_stage()
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\lightning\pytorch\trainer\trainer.py", line 1023, in _run_stage
self.fit_loop.run()
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\lightning\pytorch\loops\fit_loop.py", line 202, in run
self.advance()
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\lightning\pytorch\loops\fit_loop.py", line 355, in advance
self.epoch_loop.run(self._data_fetcher)
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\lightning\pytorch\loops\training_epoch_loop.py", line 133, in run
self.advance(data_fetcher)
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\lightning\pytorch\loops\training_epoch_loop.py", line 190, in advance
batch = next(data_fetcher)
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\lightning\pytorch\loops\fetchers.py", line 126, in __next__
batch = super().__next__()
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\lightning\pytorch\loops\fetchers.py", line 58, in __next__
batch = next(self.iterator)
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\lightning\pytorch\utilities\combined_loader.py", line 285, in __next__
out = next(self._iterator)
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\lightning\pytorch\utilities\combined_loader.py", line 65, in __next__
out[i] = next(self.iterators[i])
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\torch\utils\data\dataloader.py", line 633, in __next__
data = self._next_data()
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\torch\utils\data\dataloader.py", line 1345, in _next_data
return self._process_data(data)
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\torch\utils\data\dataloader.py", line 1371, in _process_data
data.reraise()
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\torch\_utils.py", line 644, in reraise
raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\torch\utils\data\_utils\worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\torch\utils\data\_utils\fetch.py", line 54, in fetch
return self.collate_fn(data)
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\so_vits_svc_fork\dataset.py", line 74, in forward
results[key] = _pad_stack([b[key] for b in batch]).cpu()
File "C:\Users\BranchScope\anaconda3\envs\so-vits-fork\lib\site-packages\so_vits_svc_fork\dataset.py", line 61, in _pad_stack
return torch.stack(x_padded)
RuntimeError: stack expects each tensor to be equal size, but got [1025, 1933] at entry 0 and [2, 80, 1933] at entry 8
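For triage: the collate failure at the end of the traceback can be reproduced in isolation. The sketch below is pure Python — `stack_shapes` is a hypothetical helper that only mimics the shape check `torch.stack` performs, not the library's or so-vits-svc-fork's actual code. It shows why a batch mixing eight `[1025, 1933]` spectrogram-shaped tensors with one `[2, 80, 1933]` tensor at entry 8 cannot be stacked, which suggests one preprocessed item in the dataset has a different feature layout than the rest (e.g. stale cached features from an earlier configuration — an assumption, not confirmed by the log).

```python
def stack_shapes(shapes):
    """Mimic torch.stack's precondition: every tensor must have the same shape.

    Takes a list of shape tuples and returns the stacked shape, or raises
    RuntimeError with a message in the same style as torch.stack.
    """
    first = shapes[0]
    for i, s in enumerate(shapes):
        if s != first:
            raise RuntimeError(
                f"stack expects each tensor to be equal size, "
                f"but got {list(first)} at entry 0 and {list(s)} at entry {i}"
            )
    return (len(shapes), *first)


# The batch from the log: entries 0-7 share one shape, entry 8 does not.
shapes = [(1025, 1933)] * 8 + [(2, 80, 1933)]
try:
    stack_shapes(shapes)
except RuntimeError as e:
    print(e)  # same message as the original traceback
```

If the stale-cache assumption holds, deleting the preprocessed output and re-running the preprocessing steps (in this fork, `svc pre-resample`, `svc pre-config`, and `svc pre-hubert`) before `svc train` may resolve the mismatch.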
To Reproduce
Run `svc train` after completing all the preceding preprocessing commands to train a new model from WAV files.
Additional context
Let me know if you need more logs.
Version
v4.1.11
Platform
Windows 11 using Anaconda
Code of Conduct
[X] I agree to follow this project's Code of Conduct.
No Duplicate
[X] I have checked existing issues to avoid duplicates.