YYuX-1145 / Bert-VITS2-Integration-package

vits2 backbone with bert
https://www.bilibili.com/video/BV13p4y1d7v9
GNU Affero General Public License v3.0

Training fails at the last step when running on a cloud compute platform #21

Closed · js1667 closed this issue 1 year ago

js1667 commented 1 year ago

File "train_ms.py", line 58 shutil.copy('./pretrained_models/G_0.pth','./logs/OUTPUT_MODEL/G_0.pth') ^ IndentationError: unexpected indent root@autodl-container-9e2911833c-f68eb341:~/autodl-tmp/Bert-VITS2-Integration-Package# python train_ms.py -c ./configs/config.json INFO:OUTPUT_MODEL:{'train': {'log_interval': 10, 'eval_interval': 100, 'seed': 52, 'epochs': 1000, 'learning_rate': 0.00015, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 12, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 16384, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0}, 'data': {'use_mel_posterior_encoder': False, 'training_files': 'filelists/train.list', 'validation_files': 'filelists/val.list', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 128, 'mel_fmin': 0.0, 'mel_fmax': None, 'add_blank': True, 'n_speakers': 1, 'cleaned_text': True, 'spk2id': {'acetaffy': 0}}, 'model': {'use_spk_conditioned_encoder': True, 'use_noise_scaled_mas': True, 'use_mel_posterior_encoder': False, 'use_duration_discriminator': True, 'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 8, 2, 2], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256}, 'model_dir': './logs/./OUTPUT_MODEL', 'cont': False} WARNING:OUTPUT_MODEL:/root/autodl-tmp/Bert-VITS2-Integration-Package is not a git repository, therefore hash value comparison will be ignored. INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 0 INFO:torch.distributed.distributed_c10d:Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 1 nodes. skipped: 7 , total: 1125 skipped: 0 , total: 4 Using noise scaled MAS for VITS2 Using duration discriminator for VITS2 256 2 256 2 256 2 256 2 256 2 ./logs/./OUTPUT_MODEL/DUR_0.pth error, norm_1.gamma is not in the checkpoint error, norm_1.beta is not in the checkpoint error, norm_2.gamma is not in the checkpoint error, norm_2.beta is not in the checkpoint error, cond.weight is not in the checkpoint error, cond.bias is not in the checkpoint load INFO:OUTPUT_MODEL:Loaded checkpoint './logs/./OUTPUT_MODEL/DUR_0.pth' (iteration 694) ./logs/./OUTPUT_MODEL/G_0.pth error, emb_g.weight is not in the checkpoint load INFO:OUTPUT_MODEL:Loaded checkpoint './logs/./OUTPUT_MODEL/G_0.pth' (iteration 0) ./logs/./OUTPUT_MODEL/D_0.pth load INFO:OUTPUT_MODEL:Loaded checkpoint './logs/./OUTPUT_MODEL/D_0.pth' (iteration 0) 0it [00:02, ?it/s] Traceback (most recent call last): File "train_ms.py", line 402, in main() File "train_ms.py", line 60, in main mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 239, in spawn return start_processes(fn, args, nprocs, join, daemon, start_method='spawn') File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 197, in start_processes while not context.join(): File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 160, in join raise ProcessRaisedException(msg, error_index, failed_process.pid) torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/root/autodl-tmp/Bert-VITS2-Integration-Package/train_ms.py", line 193, in run
    train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval])
  File "/root/autodl-tmp/Bert-VITS2-Integration-Package/train_ms.py", line 232, in train_and_evaluate
    mel = spec_to_mel_torch(
  File "/root/autodl-tmp/Bert-VITS2-Integration-Package/mel_processing.py", line 78, in spec_to_mel_torch
    mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
TypeError: mel() takes 0 positional arguments but 5 were given
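
Two separate problems appear in the paste. The earlier run died with an IndentationError at train_ms.py line 58 because the shutil.copy call was indented one level too far. Below is a minimal sketch of a correctly indented copy step; the makedirs/exists guard is hypothetical and not the repository's actual code, it only illustrates that the call must line up with the surrounding statements.

```python
import os
import shutil

# Hypothetical reconstruction of the copy step near train_ms.py line 58.
# The shutil.copy call must start at the same indentation level as the code
# around it; an extra leading space or tab is what produces
# "IndentationError: unexpected indent".
os.makedirs('./logs/OUTPUT_MODEL', exist_ok=True)
if os.path.exists('./pretrained_models/G_0.pth'):
    shutil.copy('./pretrained_models/G_0.pth', './logs/OUTPUT_MODEL/G_0.pth')
```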
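
The crash that actually stops training is the final TypeError. It matches the keyword-only API introduced in librosa 0.10: librosa.filters.mel() no longer accepts positional arguments, so the positional call at mel_processing.py line 78 fails. One workaround is to pin an older release (e.g. pip install "librosa<0.10"); the other is to pass the arguments by keyword. The sketch below is standalone illustration, not the repository's exact code, with the values taken from the config dump above.

```python
# Minimal sketch of the keyword-argument fix, assuming librosa >= 0.10, where
# librosa.filters.mel() is keyword-only (hence "mel() takes 0 positional
# arguments but 5 were given").
from librosa.filters import mel as librosa_mel_fn

# Values from the config dump above: sampling_rate, filter_length,
# n_mel_channels, mel_fmin, mel_fmax.
sampling_rate = 44100
n_fft = 2048
num_mels = 128
fmin, fmax = 0.0, None

# Old positional call -- fails on librosa >= 0.10:
# mel_basis = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)

# Keyword call -- accepted by both old and new librosa versions:
mel_basis = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels,
                           fmin=fmin, fmax=fmax)
print(mel_basis.shape)  # (128, 1025)
```

Applying the same keyword-style call in mel_processing.py should let train_ms.py get past this point without downgrading librosa.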

YYuX-1145 commented 1 year ago

For training in the cloud, please troubleshoot issues on your own.