NVIDIA / NeMo

A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html
Apache License 2.0

ASR Validation step predicts "???" when adding noise augmentation #7145

Closed gabitza-tech closed 1 year ago

gabitza-tech commented 1 year ago

Describe the bug

I am using the NeMo Docker image v23.04, but I also observe the following behavior in a conda environment.

I am training several Conformer models (same behavior with both FastConformer-Transducer and Conformer-Transducer) with several perturbations (speed and noise from MUSAN), and at the end of the epoch, when validation is performed, I observe the following:

[NeMo I 2023-08-01 09:14:24 rnnt_wer_bpe:395] reference :el își joacă postul în meciul cu sportul studențesc
[NeMo I 2023-08-01 09:14:24 rnnt_wer_bpe:396] predicted : ⁇  ⁇  ⁇  ⁇  ⁇  ⁇  ⁇  ⁇  ⁇  ⁇  ⁇  ⁇  ⁇  ⁇  ⁇  ⁇  ⁇
[... the predicted output continues with repeated ⁇ tokens ...]

Epoch 0: 100%|██████████████████████████████| 16235/16235 [2:21:02<00:00,  1.92it/s, loss=5.14, v_num=2]Epoch 0, global step 503: 'val_wer' reached 57.20575 (best 57.20575), saving model to '/mnt/zfs-mirror-02/homes/gpirlogeanu/git_repos/nemoasr/examples/asr/train_dir/nemo_experiments/fastconformer_BAS/checkpoints/fastconformer_BAS--val_wer=57.2057-epoch=0.ckpt' as top 5

If I don't use augmentations, the predictions are shown correctly and training works fine. With the augmentations enabled, the val_loss is NaN and the val_wer displays a random value.

I think it might be a problem with my noise manifest, but I am unsure why. When I was augmenting VAD audio, I remember cutting the noise segments to 0.63 s, but I thought that for ASR the noise perturbation automatically crops the noise audio. The manifest looks like this:

{"audio_filepath": "/home/gpirlogeanu/musan/noise/all_noise/noise-free-sound-0000.wav", "duration": 17.65, "label": "noise", "text": "_", "offset": 0.0}
{"audio_filepath": "/home/gpirlogeanu/musan/noise/all_noise/noise-free-sound-0001.wav", "duration": 40.53, "label": "noise", "text": "_", "offset": 0.0}
{"audio_filepath": "/home/gpirlogeanu/musan/noise/all_noise/noise-free-sound-0002.wav", "duration": 71.21, "label": "noise", "text": "_", "offset": 0.0}
{"audio_filepath": "/home/gpirlogeanu/musan/noise/all_noise/noise-free-sound-0003.wav", "duration": 14.23, "label": "noise", "text": "_", "offset": 0.0}
{"audio_filepath": "/home/gpirlogeanu/musan/noise/all_noise/noise-free-sound-0004.wav", "duration": 2.98, "label": "noise", "text": "_", "offset": 0.0}
{"audio_filepath": "/home/gpirlogeanu/musan/noise/all_noise/noise-free-sound-0005.wav", "duration": 21.18, "label": "noise", "text": "_", "offset": 0.0}
{"audio_filepath": "/home/gpirlogeanu/musan/noise/all_noise/noise-free-sound-0006.wav", "duration": 7.77, "label": "noise", "text": "_", "offset": 0.0}
{"audio_filepath": "/home/gpirlogeanu/musan/noise/all_noise/noise-free-sound-0007.wav", "duration": 16.61, "label": "noise", "text": "_", "offset": 0.0}

This is my config file:

# This file contains the default values for training a FastConformer-Transducer ASR model, large size (~120M), with Transducer loss and sub-word encoding.

# You may find more info about FastConformer here: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html#fast-conformer

# We suggest using trainer.precision=bf16 for GPUs which support it; otherwise trainer.precision=16 is recommended.
# Using bf16 or 16 makes it possible to double the batch size and speed up training/inference. If fp16 is not stable and the model diverges after some epochs, you may use fp32.
# Here are the suggested batch size per GPU for each precision and memory sizes:

#  +-----------+------------+------------+
#  | Precision | GPU Memory | Batch Size |
#  +===========+============+============+
#  | 32        |    16GB    |     16     |
#  |           |    32GB    |     32     |
#  |           |    80GB    |     64     |
#  +-----------+------------+------------+
#  | fp16 or   |    16GB    |     32     |
#  | bf16      |    32GB    |     64     |
#  |           |    80GB    |     128    |
#  +-----------+------------+------------+
# Here are the recommended configs for different variants of FastConformer-Transducer-BPE, other parameters are the same as in this config file.
#
#  +--------------+---------+--------+-----------+----------------+--------------+--------------------------+-----------------+------------+
#  | Model        | d_model |n_heads | n_layers  |conv_kernel_size| weight_decay | pred_hidden/joint_hidden | pred_rnn_layers |  xscaling  |
#  +==============+=========+========+===========+================+==============+==========================+=================+============+
#  | Small  (14M) |   176   |    4   |    16     |        9       |     0.0      |           320            |        1        |    True    |
#  +--------------+---------+--------+-----------+----------------+--------------+--------------------------+-----------------+------------+
#  | Medium (32M) |   256   |    4   |    16     |        9       |     1e-3     |           640            |        1        |    True    |
#  +--------------+---------+--------+-----------+----------------+--------------+--------------------------+-----------------+------------+
#  | Large (120M) |   512   |    8   |    17     |        9       |     1e-3     |           640            |        1        |    True    |
#  +--------------+---------+--------+-----------+----------------+--------------+--------------------------+-----------------+------------+
#  | XLarge (616M)|   1024  |    8   |    24     |        9       |     1e-3     |           640            |        2        |    True    |
#  +--------------+---------+--------+-----------+----------------+--------------+--------------------------+-----------------+------------+
#  | XXLarge(1.2B)|   1024  |    8   |    42     |        5       |     1e-3     |           640            |        2        |    False   |
#  +--------------+---------+--------+-----------+----------------+--------------+--------------------------+-----------------+------------+

# Note: the batch sizes above assume max_duration of 20s. If you have a longer or shorter max_duration, the batch sizes may need to be updated accordingly.

# Default learning parameters in this config are set for a global batch size of 2K, while you may use lower values.
# To increase the global batch size with a limited number of GPUs, you may use a higher accumulate_grad_batches.
# However, accumulate_grad_batches is better avoided as long as the global batch size is large enough and training is stable.
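# As a worked example (assuming, say, 4 GPUs): global batch size = num_gpus * train_ds.batch_size * accumulate_grad_batches
#   = 4 * 32 * 16 = 2048 (~2K) with the values used in this config.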

name: "FastConformer-Transducer-BPE"
init_from_nemo_model: "models/stt_en_fastconformer_transducer_large.nemo"
init_strict: false

model:
  sample_rate: 16000
  compute_eval_loss: true # eval samples can be very long and exhaust memory; set to false to disable computation of the transducer loss during validation/testing
  log_prediction: true # enables logging sample predictions in the output during training
  rnnt_reduction: 'mean_volume'
  skip_nan_grad: false

  model_defaults:
    enc_hidden: ${model.encoder.d_model}
    pred_hidden: 640
    joint_hidden: 640

  train_ds:
    manifest_filepath: "manifests/RSC_train.json" 
    sample_rate: ${model.sample_rate}
    batch_size: 32 # you may increase batch_size if your memory allows
    shuffle: true
    num_workers: 8
    pin_memory: true
    max_duration: 20 # it is set for LibriSpeech, you may need to update it for your dataset
    min_duration: 0.1
    # tarred datasets
    is_tarred: false
    tarred_audio_filepaths: null
    shuffle_n: 2048
    # bucketing params
    bucketing_strategy: "fully_randomized"
    bucketing_batch_size: null
    augmentor:
      speed:
        prob: 0.4
        sr: 16000
        resample_type: "kaiser_best"
        min_speed_rate: 0.9
        max_speed_rate: 1.1
      noise:
        prob: 0.4
        manifest_path: "manifests/noise_musan.json"
        min_snr_db: 10
        max_snr_db: 30
        rng: null
        orig_sr: 16000
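      # Worked example (assuming the two perturbations are applied independently): with prob 0.4 each,
      # ~16% of training utterances get both speed and noise perturbation and ~36% get neither.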

  validation_ds:
    manifest_filepath: "manifests/RSC_eval.json"
    sample_rate: ${model.sample_rate}
    batch_size: 16
    shuffle: false
    use_start_end_token: false
    num_workers: 8
    pin_memory: true

  test_ds:
    manifest_filepath: null
    sample_rate: ${model.sample_rate}
    batch_size: 16
    shuffle: false
    use_start_end_token: false
    num_workers: 8
    pin_memory: true

  # You may find more detail on how to train a tokenizer at: /scripts/tokenizers/process_asr_text_tokenizer.py
  tokenizer:
    dir:  "tokenizers/5_normal_BAS+news_unigram/tokenizer_spe_unigram_v1024_max_5" # path to directory which contains either tokenizer.model (bpe) or vocab.txt (for wpe)
    type: bpe  # Can be either bpe (SentencePiece tokenizer) or wpe (WordPiece tokenizer)

  preprocessor:
    _target_: nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor
    sample_rate: ${model.sample_rate}
    normalize: "per_feature"
    window_size: 0.025
    window_stride: 0.01
    window: "hann"
    features: 80
    n_fft: 512
    frame_splicing: 1
    dither: 0.00001
    pad_to: 0
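    # At sample_rate 16000, window_size 0.025 is a 400-sample analysis window and window_stride 0.01 a 160-sample hop;
    # each window is zero-padded to n_fft=512 for the FFT.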

  spec_augment:
    _target_: nemo.collections.asr.modules.SpectrogramAugmentation
    freq_masks: 2 # set to zero to disable it
    time_masks: 10 # set to zero to disable it
    freq_width: 27
    time_width: 0.05

  encoder:
    _target_: nemo.collections.asr.modules.ConformerEncoder
    feat_in: ${model.preprocessor.features}
    feat_out: -1 # you may set it if you need different output size other than the default d_model
    n_layers: 17
    d_model: 512

    # Sub-sampling parameters
    subsampling: dw_striding # vggnet, striding, stacking or stacking_norm, dw_striding
    subsampling_factor: 8 # must be power of 2 for striding and vggnet
    subsampling_conv_channels: 256 # set to -1 to make it equal to the d_model
    causal_downsampling: false

    # Reduction parameters: Can be used to add another subsampling layer at a given position.
    # Having a 2x reduction will speed up training and inference while keeping a similar WER.
    # Adding it at the end will give the best WER while adding it at the beginning will give the best speedup.
    reduction: null # pooling, striding, or null
    reduction_position: null # Encoder block index or -1 for subsampling at the end of encoder
    reduction_factor: 1

    # Feed forward module's params
    ff_expansion_factor: 4

    # Multi-headed Attention Module's params
    self_attention_model: rel_pos # rel_pos or abs_pos
    n_heads: 8 # may need to be lower for smaller d_models
    # [left, right] specifies the number of steps to be seen from left and right of each step in self-attention
    att_context_size: [-1, -1] # -1 means unlimited context
    att_context_style: regular # regular or chunked_limited
    xscaling: true # scales up the input embeddings by sqrt(d_model)
    untie_biases: true # unties the biases of the TransformerXL layers
    pos_emb_max_len: 5000

    # Convolution module's params
    conv_kernel_size: 9
    conv_norm_type: 'batch_norm' # batch_norm or layer_norm or groupnormN (N specifies the number of groups)
    # conv_context_size can be "causal" or a list of two integers such that conv_context_size[0]+conv_context_size[1]+1==conv_kernel_size
    # null means [(kernel_size-1)//2, (kernel_size-1)//2], and 'causal' means [(kernel_size-1), 0]
    conv_context_size: null
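    # For example, with conv_kernel_size=9, null is equivalent to [4, 4] and 'causal' to [8, 0].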

    ### regularization
    dropout: 0.1 # The dropout used in most of the Conformer Modules
    dropout_pre_encoder: 0.1 # The dropout used before the encoder
    dropout_emb: 0.0 # The dropout used for embeddings
    dropout_att: 0.1 # The dropout for multi-headed attention modules

    # set to non-zero to enable stochastic depth
    stochastic_depth_drop_prob: 0.0
    stochastic_depth_mode: linear  # linear or uniform
    stochastic_depth_start_layer: 1

  decoder:
    _target_: nemo.collections.asr.modules.RNNTDecoder
    normalization_mode: null # Currently only null is supported for export.
    random_state_sampling: false # Random state sampling: https://arxiv.org/pdf/1910.11455.pdf
    blank_as_pad: true # This flag must be set in order to support exporting of RNNT models + efficient inference.

    prednet:
      pred_hidden: ${model.model_defaults.pred_hidden}
      pred_rnn_layers: 1
      t_max: null
      dropout: 0.2

  # if a large vocabulary size is desired, you may wish to use SampleRNNTJoint module
  # _target_: nemo.collections.asr.modules.SampledRNNTJoint
  # n_samples: 500 # Specifies the minimum number of tokens to sample from the vocabulary space, excluding
  # the RNNT blank token. If a given value is larger than the entire vocabulary size, then the full
  # vocabulary will be used
  joint:
    _target_: nemo.collections.asr.modules.RNNTJoint
    log_softmax: null  # 'null' would set it automatically according to CPU/GPU device
    preserve_memory: false  # dramatically slows down training, but might preserve some memory

    # Fuses the computation of prediction net + joint net + loss + WER calculation
    # to be run on sub-batches of size `fused_batch_size`.
    # When this flag is set to true, consider the `batch_size` of *_ds to be just `encoder` batch size.
    # `fused_batch_size` is the actual batch size of the prediction net, joint net and transducer loss.
    # Using small values here will preserve a lot of memory during training, but will make training slower as well.
    # An optimal ratio of fused_batch_size : *_ds.batch_size is 1:1.
    # However, to preserve memory, this ratio can be 1:8 or even 1:16.
    # Extreme case of 1:B (i.e. fused_batch_size=1) should be avoided as training speed would be very slow.
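    # In this config, train_ds.batch_size=32 with fused_batch_size=4 corresponds to a ratio of 1:8.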
    fuse_loss_wer: true
    fused_batch_size: 4

    jointnet:
      joint_hidden: ${model.model_defaults.joint_hidden}
      activation: "relu"
      dropout: 0.2

  decoding:
    strategy: "greedy_batch" # can be greedy, greedy_batch, beam, tsd, alsd.

    # greedy strategy config
    greedy:
      max_symbols: 10

    # beam strategy config
    beam:
      beam_size: 2
      return_best_hypothesis: False
      score_norm: true
      tsd_max_sym_exp: 50  # for Time Synchronous Decoding
      alsd_max_target_len: 2.0  # for Alignment-Length Synchronous Decoding

  loss:
    loss_name: "default"

    warprnnt_numba_kwargs:
      # FastEmit regularization: https://arxiv.org/abs/2010.11148
      # You may enable FastEmit to reduce the latency of the model for streaming
      fastemit_lambda: 0.0  # Recommended values to be in range [1e-4, 1e-2], 0.001 is a good start.
      clamp: -1.0  # if > 0, applies gradient clamping in range [-clamp, clamp] for the joint tensor only.

  optim:
    name: adamw
    lr: 5e-3
    # optimizer arguments
    betas: [0.9, 0.98]
    weight_decay: 1e-3

    # scheduler setup
    sched:
      name: CosineAnnealing
      # scheduler config override
      warmup_steps: 15000
      warmup_ratio: null
      min_lr: 5e-4

trainer:
  devices: -1 # number of GPUs, -1 would use all available GPUs
  num_nodes: 1
  max_epochs: 500
  max_steps: -1 # computed at runtime if not set
  val_check_interval: 1.0 # Set to 0.25 to check 4 times per epoch, or an int for number of iterations
  accelerator: auto
  strategy: ddp
  accumulate_grad_batches: 16
  gradient_clip_val: 0.0
  precision: bf16 # 16, 32, or bf16
  log_every_n_steps: 100  # Interval of logging.
  enable_progress_bar: True
  resume_from_checkpoint: null # The path to a checkpoint file to continue the training, restores the whole state including the epoch, step, LR schedulers, apex, etc.
  num_sanity_val_steps: 0 # number of steps to perform validation steps for sanity check the validation process before starting the training, setting to 0 disables it
  check_val_every_n_epoch: 1 # number of evaluations on validation every n epochs
  sync_batchnorm: true
  enable_checkpointing: False  # Provided by exp_manager
  logger: false  # Provided by exp_manager
  benchmark: false # needs to be false for models with variable-length speech input as it slows down training

exp_manager:
  exp_dir: fastconformer
  name: ${name}
  create_tensorboard_logger: true
  create_checkpoint_callback: true
  checkpoint_callback_params:
    # in case of multiple validation sets, first one is used
    monitor: "val_wer"
    mode: "min"
    save_top_k: 5
    always_save_nemo: True # saves the checkpoints as nemo files instead of PTL checkpoints
  resume_if_exists: True
  resume_ignore_no_checkpoint: True

  create_wandb_logger: false
  wandb_logger_kwargs:
    name: null
    project: null
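
One way to narrow this down could be to apply the noise perturbation outside of training and check the output for non-finite values, since non-finite samples produced by the augmentation would be one possible (unconfirmed) cause of the unstable training. Below is a rough, untested sketch; it assumes NeMo's NoisePerturbation and AudioSegment classes under nemo.collections.asr.parts.preprocessing (module paths and signatures may differ between NeMo versions), and "some_training_utterance.wav" is a placeholder.

import numpy as np

# Assumed module paths; adjust to your NeMo version if they differ.
from nemo.collections.asr.parts.preprocessing.perturb import NoisePerturbation
from nemo.collections.asr.parts.preprocessing.segment import AudioSegment

# Same noise settings as in train_ds.augmentor.noise above.
noise_aug = NoisePerturbation(
    manifest_path="manifests/noise_musan.json",
    min_snr_db=10,
    max_snr_db=30,
    orig_sr=16000,
)

# Repeatedly perturb one utterance and check for NaN/Inf samples.
for i in range(100):
    seg = AudioSegment.from_file("some_training_utterance.wav", target_sr=16000)  # placeholder path
    noise_aug.perturb(seg)
    if not np.isfinite(seg.samples).all():
        print(f"Non-finite samples after noise perturbation at iteration {i}")
        break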

Thanks in advance!

nabil6391 commented 1 year ago

@gabitza-tech did you find any solution to your error?

gabitza-tech commented 1 year ago

Hi @nabil6391, not exactly, but I don't face it as often anymore. It might have been caused by the learning rate and low precision (16-bit). In my experiments, I can't reproduce the error when using bf16 or 32-bit precision. With 16-bit precision, training is usually not very stable.

BR, Gabi

nabil6391 commented 1 year ago

Thank you, your suggestion might just help solve the same issue I am facing. Will try with "bf16-mixed".