NVIDIA / NeMo

A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html
Apache License 2.0

Maximum LR in ASR YAML is not used correctly #2835

Closed (agemagician closed this issue 3 years ago)

agemagician commented 3 years ago

Hello,

I am fine-tuning the Conformer model, and I have noticed that the LR defined in the YAML file is not used; a much lower LR is actually applied during training.

The defined LR is 0.005, but the actual LR reported on wandb is around 0.00000206. The warm-up is applied correctly; only the maximum LR is wrong during training.

Are you applying any kind of change to the LR based on the number of GPUs or accumulate_grad_batches?

[W&B screenshot showing the reported learning-rate curve]

# It contains the default values for training a Conformer-CTC ASR model, large size (~120M) with CTC loss and sub-word encoding.

# Architecture and training config:
# Default learning parameters in this config are set for effective batch size of 2K. To train it with smaller effective
# batch sizes, you may need to re-tune the learning parameters or use higher accumulate_grad_batches.
# Here are the recommended configs for different variants of Conformer-CTC, other parameters are the same as in this config file.
# One extra layer (compared to original paper) is added to the medium and large variants to compensate for replacing the LSTM decoder with a linear one.
#
#  +--------------+---------+---------+----------+------------+-----+
#  | Model        | d_model | n_heads | n_layers | time_masks | lr  |
#  +==============+=========+=========+==========+============+=====+
#  | Small  (13M) |   176   |    4    |    16    |     5      | 5.0 |
#  +--------------+---------+---------+----------+------------+-----+
#  | Medium (30M) |   256   |    4    |    18    |     5      | 5.0 |
#  +--------------+---------+---------+----------+------------+-----+
#  | Large (121M) |   512   |    8    |    18    |     10     | 2.0 |
#  +--------------+---------+---------+----------+------------+-----+
#
# If you do not want to train with AMP, you may use a weight decay of 0.0 or reduce the number of time masks to 2
# with time_width=100. It may help when you want to train for fewer epochs and need faster convergence.
# With weight_decay=0.0, the learning rate may need to be reduced to 2.0.
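# Note on the "effective batch size of 2K" above: effective batch size = train_ds.batch_size x num_gpus x accumulate_grad_batches.
# For example, the settings below (batch_size=8, accumulate_grad_batches=8) on 8 GPUs give only 512, well below 2K,
# so the learning parameters may need re-tuning as noted above.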

# You may find more info about Conformer-CTC here: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html#conformer-ctc
# Pre-trained models of Conformer-CTC can be found here: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/results.html
# The checkpoint of the large model trained on LibriSpeech with this recipe can be found here: https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_conformer_ctc_large_ls

name: "Conformer-CTC-BPE-Small"

model:
  sample_rate: 16000
  log_prediction: true # enables logging sample predictions in the output during training
  ctc_reduction: 'mean_batch'

  train_ds:
    manifest_filepath: /mnt/local/extra/users/ael/dl/data/en/train_filtered_flac.json
    sample_rate: ${model.sample_rate}
    batch_size: 8 # you may increase batch_size if your memory allows
    shuffle: true
    num_workers: 4
    pin_memory: true
    use_start_end_token: false
    trim_silence: false
    max_duration: 17 # it is set for LibriSpeech, you may need to update it for your dataset
    min_duration: 0.1
    is_tarred: false
    #use_dali: true

  validation_ds:
    manifest_filepath:
    - /home/jho/nemo/github_ahmed/NeMo/tasks/en/weighted_combined1/datasets/librispeech_dev_clean.json
    - /home/jho/nemo/github_ahmed/NeMo/tasks/en/weighted_combined1/datasets/librispeech_dev_other.json
    sample_rate: ${model.sample_rate}
    batch_size: 8 # you may increase batch_size if your memory allows
    shuffle: false
    num_workers: 4
    pin_memory: true
    use_start_end_token: false
    is_tarred: false
    tarred_audio_filepaths: ''
    #use_dali: true

  test_ds:
    manifest_filepath:
    - /home/jho/nemo/github_ahmed/NeMo/tasks/en/weighted_combined1/datasets/librispeech_dev_clean.json
    - /home/jho/nemo/github_ahmed/NeMo/tasks/en/weighted_combined1/datasets/librispeech_dev_other.json
    sample_rate: ${model.sample_rate}
    batch_size: 8 # you may increase batch_size if your memory allows
    shuffle: false
    num_workers: 4
    pin_memory: true
    use_start_end_token: false
    is_tarred: false
    tarred_audio_filepaths: ''
    #use_dali: true

  # recommend small vocab size of 128 or 256 when using 4x sub-sampling
  # you may find more detail on how to train a tokenizer at: /scripts/tokenizers/process_asr_text_tokenizer.py
  tokenizer:
    dir: /home/ael/Scripts/nemo/training/pretrained_models/stt_en_conformer_ctc_small/  # path to directory which contains either tokenizer.model (bpe) or vocab.txt (wpe)
    type: bpe  # Can be either bpe (SentencePiece tokenizer) or wpe (WordPiece tokenizer)

  preprocessor:
    _target_: nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor
    sample_rate: ${model.sample_rate}
    normalize: "per_feature"
    window_size: 0.025
    window_stride: 0.01
    window: "hann"
    features: 80
    n_fft: 512
    log: true
    frame_splicing: 1
    dither: 0.00001
    pad_to: 0
    pad_value: 0.0

  spec_augment:
    _target_: nemo.collections.asr.modules.SpectrogramAugmentation
    freq_masks: 2 # set to zero to disable it
    # you may use lower time_masks for smaller models to have a faster convergence
    time_masks: 2 # set to zero to disable it
    freq_width: 27
    time_width: 0.05

  encoder:
    _target_: nemo.collections.asr.modules.ConformerEncoder
    feat_in: ${model.preprocessor.features}
    feat_out: -1 # you may set it if you need different output size other than the default d_model
    n_layers: 16
    d_model: 176

    # Sub-sampling params
    subsampling: striding # vggnet or striding, vggnet may give better results but needs more memory
    subsampling_factor: 4 # must be power of 2
    subsampling_conv_channels: -1 # -1 sets it to d_model

    # Feed forward module's params
    ff_expansion_factor: 4

    # Multi-headed Attention Module's params
    self_attention_model: rel_pos # rel_pos or abs_pos
    n_heads: 4 # may need to be lower for smaller d_models
    # [left, right] specifies the number of steps to be seen from left and right of each step in self-attention
    att_context_size: [-1, -1] # -1 means unlimited context
    xscaling: true # scales up the input embeddings by sqrt(d_model)
    untie_biases: true # unties the biases of the TransformerXL layers
    pos_emb_max_len: 5000

    # Convolution module's params
    conv_kernel_size: 31

    ### regularization
    dropout: 0.1 # The dropout used in most of the Conformer Modules
    dropout_emb: 0.0 # The dropout used for embeddings
    dropout_att: 0.1 # The dropout for multi-headed attention modules

  decoder:
    _target_: nemo.collections.asr.modules.ConvASRDecoder
    feat_in: null
    num_classes: -1
    vocabulary: []

  optim:
    name: adamw
    lr: 0.005
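    # NOTE: with the NoamAnnealing scheduler below, this lr acts as a scalar multiplier on the Noam curve,
    # not as the peak LR that is actually applied (see the maintainer's explanation further down this thread)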
    # optimizer arguments
    betas: [0.9, 0.98]
    # less necessity for weight_decay as we already have large augmentations with SpecAug
    # you may need weight_decay for large models, stable AMP training, small datasets, or when lower augmentations are used
    # weight decay of 0.0 with lr of 2.0 also works fine
    weight_decay: 0.0

    # scheduler setup
    sched:
      name: NoamAnnealing
      d_model: ${model.encoder.d_model}
      # scheduler config override
      warmup_steps: null
      warmup_ratio: 0.05
      min_lr: 1e-6

trainer:
  gpus: -1 # number of GPUs, -1 would use all available GPUs
  num_nodes: 1
  max_epochs: 15
  max_steps: null # computed at runtime if not set
  val_check_interval: 0.5 # Set to 0.25 to check 4 times per epoch, or an int for number of iterations
  accelerator: ddp
  accumulate_grad_batches: 8
  gradient_clip_val: 0.0
  amp_level: O0 # O1/O2 for mixed precision
  precision: 32 # Should be set to 16 for O1 and O2 to enable the AMP.
  log_every_n_steps: 100  # Interval of logging.
  progress_bar_refresh_rate: 10
  resume_from_checkpoint: null # The path to a checkpoint file to continue the training, restores the whole state including the epoch, step, LR schedulers, apex, etc.
  num_sanity_val_steps: 1 # number of steps to perform validation steps for sanity check the validation process before starting the training, setting to 0 disables it
  check_val_every_n_epoch: 1 # number of evaluations on validation every n epochs
  sync_batchnorm: true
  checkpoint_callback: false  # Provided by exp_manager
  logger: false  # Provided by exp_manager
  plugins: ddp_sharded

exp_manager:
  exp_dir:  /mnt/local/extra/users/ael/dl/data/nemo/models/en/
  name: ${name}
  create_tensorboard_logger: true
  create_checkpoint_callback: true
  checkpoint_callback_params:
    # in case of multiple validation sets, first one is used
    monitor: "val_wer"
    mode: "min"
    save_top_k: 3

  # you need to set these two to True to continue the training
  resume_if_exists: true
  resume_ignore_no_checkpoint: true

  # You may use this section to create a W&B logger
  create_wandb_logger: true
  wandb_logger_kwargs:
    name: conformer-small-bpe-en-balanced-16k
    project: asr-en

hydra:
  run:
    dir: /mnt/local/extra/users/ael/dl/data/nemo/models/en/
titu1994 commented 3 years ago

For the Noam scheduler, the configured LR is a scalar multiplier on the LR determined by the Noam scheduler, NOT the actual LR itself.

That is why you will note that the config had high values such as 2 and 5 for the LR - it scales the Noam LR by 2x or 5x.
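For reference, here is a minimal sketch of how that multiplier plays out (assuming the standard Noam formula that NoamAnnealing follows; warmup_steps=5000 is only an illustrative stand-in for the value the config derives from warmup_ratio):

# Sketch of the Noam schedule: effective LR = lr * d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)
def noam_lr(step, lr=0.005, d_model=176, warmup_steps=5000, min_lr=1e-6):
    step = max(step, 1)
    mult = (d_model ** -0.5) * min(step ** -0.5, step * warmup_steps ** -1.5)
    return max(lr * mult, min_lr)

print(noam_lr(5_000))   # peak at the end of warmup: ~5.3e-06
print(noam_lr(50_000))  # later in training: ~1.7e-06, the same order as the ~2.06e-06 reported on wandb

So with lr: 0.005 and d_model: 176, an effective LR in the 1e-6 range is exactly what this schedule produces.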

agemagician commented 3 years ago

Aha, thanks for the explanation. In this case, we should use a much higher learning rate for fine-tuning.

Thanks again.

tareqalmuntasir7 commented 2 years ago

@titu1994 So if the pretrained model has LR 2.0, what would be the optimal value for fine-tuning?

titu1994 commented 2 years ago

We normally fine-tune with 1/5 to 1/10 of the initial learning rate when the tokenizer is not changed (language and vocab remain the same) and only the domain of speech has shifted.

If you are replacing the decoder, or training on another language, you should use the pretraining LR itself, and just use the loaded checkpoint as a good initialization.
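For example, a rough sketch of what scaling the multiplier down looks like in code (this assumes the NeMo 1.x ModelPT API and uses the stt_en_conformer_ctc_large_ls checkpoint named in the config header; 0.4 is just the pretraining multiplier 2.0 scaled by 1/5):

import copy
from nemo.collections.asr.models import EncDecCTCModelBPE

# load the pretrained LibriSpeech checkpoint referenced in the config header
asr_model = EncDecCTCModelBPE.from_pretrained(model_name="stt_en_conformer_ctc_large_ls")

# keep the pretraining optimizer/scheduler config but scale the Noam multiplier down to ~1/5
optim_cfg = copy.deepcopy(asr_model.cfg.optim)
optim_cfg.lr = 0.4  # pretraining used 2.0; tokenizer and language are assumed unchanged here
asr_model.setup_optimization(optim_config=optim_cfg)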

evilc3 commented 2 years ago

We normally fine-tune with 1/5 to 1/10 of the initial learning rate when the tokenizer is not changed (language and vocab remain the same) and only the domain of speech has shifted.

If you are replacing the decoder, or training on another language, you should use the pretraining LR itself, and just use the loaded checkpoint as a good initialization.

Do we need to use the scheduler while training? It is never used in the ASR_CTC_Language_Finetuning notebook.

To use the scheduler, asr_model.set_trainer(trainer) needs to be called before the data setup and optimizer setup steps, which is not the case in the notebooks.
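A rough sketch of that ordering (assuming the NeMo 1.x ModelPT API; asr_model and cfg are placeholders for the restored model and the training config):

import pytorch_lightning as pl

trainer = pl.Trainer(gpus=-1, max_epochs=100, accumulate_grad_batches=4)

# attach the trainer first so the scheduler can compute max_steps from the data and trainer settings
asr_model.set_trainer(trainer)

# then set up data and the optimizer/scheduler
asr_model.setup_training_data(cfg.model.train_ds)
asr_model.setup_validation_data(cfg.model.validation_ds)
asr_model.setup_optimization(cfg.model.optim)

trainer.fit(asr_model)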

My use case is the second type, where I replace the tokenizer (but keep the same language).

The following params give me NaN loss after about 40% of epoch 0 (there is no problem with the data, I checked). At the point where it gives NaN, there is a rapid change in the learning rate from the Noam scheduler.

learning rate = 1.0, batch_size = 64 (with grad accum == 4), warm_up_steps = 10000, max_epochs = 100, total_steps > 100k

But the following params have worked fine, still using the same scheduler settings:

learning rate = 0.1, batch_size = 128 (with grad accum == 4), warm_up_steps = 10000, max_epochs = 100, total_steps > 100k

But I feel that the scheduler has reduced the LR too much and now the model is not learning. Are there any methods to verify this?
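One way to check (a sketch, assuming the PyTorch Lightning APIs of that era): log the per-step LR next to the loss with a LearningRateMonitor callback, or read it directly off the optimizer once training has started:

import pytorch_lightning as pl
from pytorch_lightning.callbacks import LearningRateMonitor

# log the scheduler's LR to W&B/TensorBoard every step
trainer = pl.Trainer(gpus=-1, callbacks=[LearningRateMonitor(logging_interval="step")])

# or inspect the current value directly during/after training, e.g. from a callback
current_lr = trainer.optimizers[0].param_groups[0]["lr"]
print(f"current LR: {current_lr:.3e}")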

titu1994 commented 2 years ago

We use the default scheduler inside the ASR CTC fine-tuning notebook, and yes, it is always required since most models will not converge otherwise. A higher batch size gives more stable updates, but it is not guaranteed to avoid NaNs. I assume you are using mixed precision; that is not advised with Conformers. A high LR + fp16 will cause overflow in the attention matrix and produce NaN gradients.
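A sketch of what turning AMP off looks like with the trainer settings from the config above (assuming the PyTorch Lightning version NeMo relied on at the time):

import pytorch_lightning as pl

# full fp32 training: avoids the attention overflow seen with a high Noam multiplier + fp16
trainer = pl.Trainer(
    gpus=-1,
    accelerator="ddp",
    precision=32,              # 16 enables AMP; only consider it with a lower LR multiplier
    accumulate_grad_batches=4,
    gradient_clip_val=0.0,
)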

titu1994 commented 2 years ago

Please move this question to its own issue; it is not relevant to the original thread.

evilc3 commented 2 years ago

Ok, I opened an issue here: https://github.com/NVIDIA/NeMo/issues/4183