kan-bayashi / ParallelWaveGAN

Unofficial Parallel WaveGAN (+ MelGAN & Multi-band MelGAN & HiFi-GAN & StyleMelGAN) with Pytorch
https://kan-bayashi.github.io/ParallelWaveGAN/
MIT License

If fine-tuning from a pre-trained model, should generator_scheduler_params be updated? #373

Open skol101 opened 2 years ago

skol101 commented 2 years ago

I'm fine-tuning HiFi-GAN from a pretrained model at 2.5M steps up to 3M steps.

Is updating the scheduler milestones like this the right way to go?

generator_optimizer_type: Adam
generator_optimizer_params:
    lr: 2.0e-4
    betas: [0.5, 0.9]
    weight_decay: 0.0
generator_scheduler_type: MultiStepLR
generator_scheduler_params:
    gamma: 0.5
    milestones:
        - 2600000
        - 2700000
        - 2800000
        - 2900000
generator_grad_norm: -1
discriminator_optimizer_type: Adam
discriminator_optimizer_params:
    lr: 2.0e-4
    betas: [0.5, 0.9]
    weight_decay: 0.0
discriminator_scheduler_type: MultiStepLR
discriminator_scheduler_params:
    gamma: 0.5
    milestones:
        - 2600000
        - 2700000
        - 2800000
        - 2900000
discriminator_grad_norm: -1
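
For reference, here is a minimal sketch (not this repository's trainer code) of how PyTorch's MultiStepLR behaves when fine-tuning resumes past the original schedule. The step counts and the "jump" trick are for illustration only; in the actual trainer the scheduler state is restored from the checkpoint and stepped once per iteration.

import torch

param = torch.nn.Parameter(torch.zeros(1))  # stands in for the generator
optimizer = torch.optim.Adam([param], lr=2.0e-4, betas=(0.5, 0.9), weight_decay=0.0)

# MultiStepLR needs 'initial_lr' in each param group when created with a
# non-default last_epoch (normally handled by load_state_dict on resume).
for group in optimizer.param_groups:
    group.setdefault("initial_lr", 2.0e-4)

scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer,
    gamma=0.5,
    milestones=[2_600_000, 2_700_000, 2_800_000, 2_900_000],
    last_epoch=2_500_000,  # pretend we resumed from a 2.5M-step checkpoint
)
print(optimizer.param_groups[0]["lr"])  # 2.0e-4: no extended milestone reached yet

# Jump to just before each milestone instead of stepping 100k+ times.
for just_before in (2_599_999, 2_699_999, 2_799_999, 2_899_999):
    scheduler.last_epoch = just_before
    scheduler.step()  # crossing the milestone halves the learning rate
    print(scheduler.last_epoch, optimizer.param_groups[0]["lr"])
# prints lrs of 0.0001, 5e-05, 2.5e-05, 1.25e-05 at 2.6M/2.7M/2.8M/2.9M steps

The point being: if the milestones were left at values below the 2.5M-step checkpoint, the scheduler would never fire again and fine-tuning would simply continue at whatever learning rate the checkpoint ended with, whereas extending the milestones as in the config above makes the learning rate keep decaying during the extra 500k steps. Whether you want further decay or a constant (possibly re-raised) learning rate for fine-tuning is a separate tuning choice.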