Closed ziyuan666 closed 8 months ago
The code has also changed.
I'm currently using the Chinese-specialized package. After roughly G_25000 the output turns into metallic, "electronic" audio; the first few hundred to few thousand steps sound fine. Where should I look to diagnose this? This is the config of the model currently training:
1. Definitely overfitting. 2. Where did the DUR come from? 3. Did you tune the learning rate by hand?
1. Definitely overfitting (I don't understand this part). 2. Where did the DUR come from? (It's the Chinese-specialized version downloaded from https://github.com/fishaudio/Bert-VITS2/releases.) 3. Learning rate tuned by hand? After reading some material I changed it from 0.0001 to 0.0005. I also set `"bf16_run": true`.
```json
{
  "train": {
    "log_interval": 100,
    "eval_interval": 100,
    "seed": 42,
    "epochs": 10000,
    "learning_rate": 0.0005,
    "betas": [0.8, 0.99],
    "eps": 1e-09,
    "batch_size": 10,
    "bf16_run": true,
    "lr_decay": 0.99995,
    "segment_size": 16384,
    "init_lr_ratio": 1,
    "warmup_epochs": 0,
    "c_mel": 45,
    "c_kl": 1.0,
    "c_commit": 100,
    "skip_optimizer": true,
    "freeze_ZH_bert": false,
    "freeze_JP_bert": false,
    "freeze_EN_bert": false,
    "freeze_emo": false
  },
  "data": {
    "training_files": "Data/genie/filelists/train.list",
    "validation_files": "Data/genie/filelists/val.list",
    "max_wav_value": 32768.0,
    "sampling_rate": 44100,
    "filter_length": 2048,
    "hop_length": 512,
    "win_length": 2048,
    "n_mel_channels": 128,
    "mel_fmin": 0.0,
    "mel_fmax": null,
    "add_blank": true,
    "n_speakers": 1,
    "cleaned_text": true,
    "spk2id": {"genie": 0}
  },
  "model": {
    "use_spk_conditioned_encoder": true,
    "use_noise_scaled_mas": true,
    "use_mel_posterior_encoder": false,
    "use_duration_discriminator": true,
    "inter_channels": 192,
    "hidden_channels": 192,
    "filter_channels": 768,
    "n_heads": 2,
    "n_layers": 6,
    "kernel_size": 3,
    "p_dropout": 0.1,
    "resblock": "1",
    "resblock_kernel_sizes": [3, 7, 11
```
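As a quick sanity check, the settings the replies flag can be diffed against stock values. This is a minimal sketch, assuming the stock values are the ones implied by the thread (learning rate 0.0001 before the manual change, `bf16_run` off); the `posted`/`stock` dicts are illustrative, not taken from any authoritative Bert-VITS2 default config:

```python
# Sketch: list which training settings in the posted config deviate from
# the values this thread treats as stock. The "stock" numbers come from
# the discussion above, not from the repo's official defaults.
posted = {"learning_rate": 0.0005, "bf16_run": True, "lr_decay": 0.99995}
stock = {"learning_rate": 0.0001, "bf16_run": False, "lr_decay": 0.99995}

# Collect (stock, posted) pairs for every key whose value was changed.
overrides = {
    key: (stock[key], posted[key])
    for key in stock
    if posted[key] != stock[key]
}
for key, (old, new) in sorted(overrides.items()):
    print(f"{key}: stock={old} -> posted={new}")
```

With the thread's values this flags `learning_rate` and `bf16_run` as the two simultaneous changes, which matches the "stacked buffs" diagnosis below.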
You've stacked every buff at once; it would be strange if it *didn't* blow up. Read the help docs. Also, the Chinese-specialized base model has no dur.
So can I just use the default parameters plus the Chinese-specialized base model?
The defaults are of course the most general-purpose; they won't cause any major problems.
One more question: can I just copy the Chinese-specialized package into pretrained_models?