babysor / MockingBird

🚀 AI voice cloning: Clone a voice in 5 seconds to generate arbitrary speech in real time

Training stops automatically at 640k, and restarting also stops automatically #840

Closed ttkrpink closed 1 year ago

ttkrpink commented 1 year ago

Below is the output from the start of training. The program ran for several days, reached 640k, and then stopped automatically. Restarting produces the output below and then it stops automatically again. Do I need to change the total number of epochs somewhere?

```
python synthesizer_train.py lu ./dataset/SV2TTS/synthesizer
Arguments:
    run_id:          lu
    syn_dir:         ./dataset/SV2TTS/synthesizer
    models_dir:      synthesizer/saved_models/
    save_every:      1000
    backup_every:    25000
    log_every:       200
    force_restart:   False
    hparams:

Checkpoint path: synthesizer/saved_models/lu/lu.pt
Loading training data from: dataset/SV2TTS/synthesizer/train.txt
Using model: Tacotron
Using device: cuda

Initialising Tacotron Model...

Loading the json with %s {'sample_rate': 16000, 'n_fft': 800, 'num_mels': 80, 'hop_size': 200, 'win_size': 800, 'fmin': 55, 'min_level_db': -100, 'ref_level_db': 20, 'max_abs_value': 4.0, 'preemphasis': 0.97, 'preemphasize': True, 'tts_embed_dims': 512, 'tts_encoder_dims': 256, 'tts_decoder_dims': 128, 'tts_postnet_dims': 512, 'tts_encoder_K': 5, 'tts_lstm_dims': 1024, 'tts_postnet_K': 5, 'tts_num_highways': 4, 'tts_dropout': 0.5, 'tts_cleaner_names': ['basic_cleaners'], 'tts_stop_threshold': -3.4, 'tts_schedule': [[2, 0.001, 10000, 24], [2, 0.0005, 15000, 24], [2, 0.0002, 20000, 24], [2, 0.0001, 30000, 24], [2, 5e-05, 40000, 24], [2, 1e-05, 60000, 24], [2, 5e-06, 160000, 24], [2, 3e-06, 320000, 24], [2, 1e-06, 640000, 24]], 'tts_clip_grad_norm': 1.0, 'tts_eval_interval': 500, 'tts_eval_num_samples': 1, 'tts_finetune_layers': [], 'max_mel_frames': 900, 'rescale': True, 'rescaling_max': 0.9, 'synthesis_batch_size': 16, 'signal_normalization': True, 'power': 1.5, 'griffin_lim_iters': 60, 'fmax': 7600, 'allow_clipping_in_normalization': True, 'clip_mels_length': True, 'use_lws': False, 'symmetric_mels': True, 'trim_silence': True, 'speaker_embedding_size': 256, 'silence_min_duration_split': 0.4, 'utterance_min_duration': 1.6, 'use_gst': True, 'use_ser_for_gst': True}
Trainable Parameters: 0.000M

Loading weights at synthesizer/saved_models/lu/lu.pt
Tacotron weights loaded from step 640000
Using inputs from:
    dataset/SV2TTS/synthesizer/train.txt
    dataset/SV2TTS/synthesizer/mels
    dataset/SV2TTS/synthesizer/embeds
Found 3012 samples
```
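For context, the `tts_schedule` in the hparams dump above ends at step 640 000, which is exactly the step of the restored checkpoint. A minimal sketch of why the run then exits immediately (the helper name is hypothetical; the real loop lives in the synthesizer trainer):

```python
# Progressive training schedule copied from the hparams dump above:
# each entry is (r, learning_rate, target_step, batch_size).
tts_schedule = [
    (2, 1e-03, 10_000, 24),
    (2, 5e-04, 15_000, 24),
    (2, 2e-04, 20_000, 24),
    (2, 1e-04, 30_000, 24),
    (2, 5e-05, 40_000, 24),
    (2, 1e-05, 60_000, 24),
    (2, 5e-06, 160_000, 24),
    (2, 3e-06, 320_000, 24),
    (2, 1e-06, 640_000, 24),  # last milestone: nothing left past this step
]

def remaining_sessions(schedule, current_step):
    """Schedule entries whose target step the run has not reached yet."""
    return [entry for entry in schedule if current_step < entry[2]]

# A checkpoint restored at step 640_000 leaves no pending sessions,
# so the training loop has nothing to do and exits.
print(remaining_sessions(tts_schedule, 640_000))  # → []
```

So this is not an epoch limit: the schedule itself defines the final training step.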

VERT2022 commented 1 year ago

Edit the `hparams.py` file under your Python install path, `{python_install}\Python\Python39\Lib\site-packages\models\synthesizer\hparams.py`:

```python
tts_schedule = [(2, 1e-3, 10_000, 192),   # Progressive training schedule
                (2, 5e-4, 15_000, 192),   # (r, lr, step, batch_size)
                (2, 2e-4, 20_000, 192),
                (2, 1e-4, 30_000, 192),
                (2, 5e-5, 40_000, 192),
                (2, 1e-5, 60_000, 192),
                (2, 5e-6, 160_000, 192),  # r = reduction factor (# of mel frames
                (2, 3e-6, 320_000, 192),  #     synthesized per decoder iteration)
                (2, 1e-6, 640_000, 192)], # lr = learning rate
```

The last row, `640_000`, is the maximum training step. You can append two more rows after it:

```python
                (2, 1e-6, 640_000, 192),
                (2, 5e-7, 1_280_000, 192),
                (2, 1e-7, 2_560_000, 192)], # lr = learning rate
```

so that the schedule ends as shown above. Adjust the learning rates yourself and test; I only extrapolated from the existing pattern. Ignore the `192`; that is a batch size tuned for my GPU. The path above is for Windows; on Linux it is probably `models/synthesizer/hparams.py` under the repo directory.
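With the extra rows appended, a milestone lookup over the schedule finds work to do again when resuming from step 640k. A quick sanity sketch (hypothetical helper name, illustrative batch size of 24):

```python
# Extended tail of the schedule, as suggested above (batch size is illustrative).
extended_tail = [
    (2, 1e-6, 640_000, 24),
    (2, 5e-7, 1_280_000, 24),
    (2, 1e-7, 2_560_000, 24),
]

def next_session(schedule, current_step):
    """First entry whose target step is still ahead of the current step."""
    for r, lr, max_step, batch_size in schedule:
        if current_step < max_step:
            return (r, lr, max_step, batch_size)
    return None  # schedule exhausted: training would stop here

# Resuming from step 640_000 now targets the 1_280_000 milestone
# instead of finding the schedule exhausted.
print(next_session(extended_tail, 640_000))  # → (2, 5e-07, 1280000, 24)
```

Note that resuming picks up the appended rows only; already-completed milestones are skipped because their target steps are behind the checkpoint.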

ttkrpink commented 1 year ago

Wow, thank you so much. Clear and easy to understand.