openvpi / DiffSinger

An advanced singing voice synthesis system with high fidelity, expressiveness, controllability and flexibility based on DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism
Apache License 2.0

Support for complex LR scheduler configuration #125

Closed hrukalive closed 1 year ago

hrukalive commented 1 year ago

Recursively construct objects if cls is present in sub-arguments. This means that complex schedulers such as SequentialLR are possible. The following is a demo config:

lr_scheduler_args:
  scheduler_cls: torch.optim.lr_scheduler.ChainedScheduler
  schedulers:
  - cls: torch.optim.lr_scheduler.SequentialLR
    schedulers:
    - cls: torch.optim.lr_scheduler.StepLR
      step_size: 10
    - cls: torch.optim.lr_scheduler.CosineAnnealingWarmRestarts
      T_0: 10
    milestones:
    - 10
  - cls: torch.optim.lr_scheduler.SequentialLR
    schedulers:
    - cls: torch.optim.lr_scheduler.ExponentialLR
      gamma: 0.5
    - cls: torch.optim.lr_scheduler.LinearLR
    - cls: torch.optim.lr_scheduler.MultiStepLR
      milestones:
      - 10
      - 20
    milestones:
    - 10
    - 20

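The recursive construction described above can be sketched as follows. This is a minimal illustration, not the actual DiffSinger code; the helper name build_object_from_config is hypothetical:

```python
import importlib

import torch


def build_object_from_config(node, *args):
    # If a dict carries a 'cls' key, import that class and instantiate it,
    # recursing into its arguments first so nested configs become objects.
    if isinstance(node, dict) and 'cls' in node:
        kwargs = dict(node)
        module_name, _, class_name = kwargs.pop('cls').rpartition('.')
        cls = getattr(importlib.import_module(module_name), class_name)
        kwargs = {k: build_object_from_config(v, *args) for k, v in kwargs.items()}
        return cls(*args, **kwargs)
    # Lists may hold nested scheduler configs (e.g. the 'schedulers' key).
    if isinstance(node, list):
        return [build_object_from_config(item, *args) for item in node]
    return node  # plain value (int, float, str): leave as-is


# Demo: build the first SequentialLR branch of the config above.
optimizer = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.1)
scheduler = build_object_from_config({
    'cls': 'torch.optim.lr_scheduler.SequentialLR',
    'schedulers': [
        {'cls': 'torch.optim.lr_scheduler.StepLR', 'step_size': 10},
        {'cls': 'torch.optim.lr_scheduler.CosineAnnealingWarmRestarts', 'T_0': 10},
    ],
    'milestones': [10],
}, optimizer)
```

The optimizer is threaded through as a positional argument so that every nested scheduler receives it, while plain values such as milestones pass through unchanged.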
Meanwhile, a side effect of supporting the complex configuration is that the LR scheduler state stored in the checkpoint is no longer used. The new learning rate will be calculated from the latest configuration before training starts.
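A toy illustration of what this recalculation means, assuming (this is not the repository's actual resume code) that the scheduler is rebuilt from the current config and stepped to the resumed position instead of loading its state_dict from the checkpoint:

```python
import torch

# Freshly built scheduler from the *current* config; any edits made to
# step_size or gamma since the checkpoint take effect immediately.
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.SGD([param], lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.5)

resumed_step = 4  # pretend the checkpoint was saved at step 4
for _ in range(resumed_step):
    scheduler.step()

# lr = 0.1 * 0.5 ** (4 // 2), i.e. ~0.025, derived from the config,
# not from whatever LR state the checkpoint contained.
print(optimizer.param_groups[0]['lr'])
```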

Warning:

Nested SequentialLR and ChainedScheduler have unexpected behavior, so do not nest them. Also, before putting a scheduler inside ChainedScheduler, make sure its documentation states that it is chainable.

yqzhishen commented 1 year ago

There are conflicts after merging #120 into main