NVIDIA / NeMo

A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html
Apache License 2.0

Problem with fine tuning fastconformer_hybrid_large_streaming_multi model on another language #8256

traidn closed this issue 7 months ago

traidn commented 9 months ago

Hello! I want to train a hybrid model like that one for the Russian language. First I tried to train it from scratch on the Golos dataset (~1100 hours), but I ran into a convergence problem (like in this issue): even after 49 epochs the WER was 1.0. So I decided to start from the pretrained English model and fine-tune it for the new language. I created a new tokenizer for my dataset and put its path in the config file. I used an almost default config from the model card:

name: "FastConformer-Hybrid-Transducer-CTC-BPE-Streaming-FineTuned-on-English"

model:
  sample_rate: 16000
  compute_eval_loss: false # eval samples can be very long and exhaust memory. Disable computation of transducer loss during validation/testing with this flag.
  log_prediction: true # enables logging sample predictions in the output during training
  skip_nan_grad: false

  model_defaults:
    enc_hidden: ${model.encoder.d_model}
    pred_hidden: 640
    joint_hidden: 640

  train_ds:
    manifest_filepath: "/home/user/Downloads/Golos_dataset/train/golos_manifest.jsonl"
    sample_rate: ${model.sample_rate}
    batch_size: 12 # you may increase batch_size if your memory allows
    shuffle: true
    num_workers: 8
    pin_memory: true
    max_duration: 20 # you may need to update it for your dataset
    min_duration: 0.1
    # tarred datasets
    is_tarred: false
    tarred_audio_filepaths: null
    shuffle_n: 2048
    # bucketing params
    bucketing_strategy: "synced_randomized"
    bucketing_batch_size: null

  validation_ds:
    manifest_filepath: "/home/user/Downloads/Golos_dataset/train/1hour.jsonl"
    sample_rate: ${model.sample_rate}
    batch_size: 12
    shuffle: false
    use_start_end_token: false
    num_workers: 8
    pin_memory: true

  test_ds:
    manifest_filepath: "/home/user/Downloads/Golos_dataset/train/1hour.jsonl"
    sample_rate: ${model.sample_rate}
    batch_size: 16
    shuffle: false
    use_start_end_token: false
    num_workers: 8
    pin_memory: true

  tokenizer:
    dir: "/home/user/PycharmProjects/NEMO_Project/fast_conformer/unigram_tokenizer/tokenizer_spe_unigram_v1024"  # path to directory which contains either tokenizer.model (bpe) or vocab.txt (for wpe)
    type: bpe  # Can be either bpe (SentencePiece tokenizer) or wpe (WordPiece tokenizer)

  preprocessor:
    _target_: nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor
    sample_rate: ${model.sample_rate}
    normalize: "NA" # No normalization for mel-spectrogram makes streaming easier
    window_size: 0.025
    window_stride: 0.01
    window: "hann"
    features: 80
    n_fft: 512
    frame_splicing: 1
    dither: 0.00001
    pad_to: 0

  spec_augment:
    _target_: nemo.collections.asr.modules.SpectrogramAugmentation
    freq_masks: 2 # set to zero to disable it
    time_masks: 10 # set to zero to disable it
    freq_width: 27
    time_width: 0.05

  encoder:
    _target_: nemo.collections.asr.modules.ConformerEncoder
    feat_in: ${model.preprocessor.features}
    feat_out: -1 # you may set it if you need different output size other than the default d_model
    n_layers: 17
    d_model: 512

    # Sub-sampling parameters
    subsampling: dw_striding # vggnet, striding, stacking or stacking_norm, dw_striding
    subsampling_factor: 8 # must be power of 2 for striding and vggnet
    subsampling_conv_channels: 256 # set to -1 to make it equal to the d_model
    causal_downsampling: true

    # Feed forward module's params
    ff_expansion_factor: 4

    # Multi-headed Attention Module's params
    self_attention_model: rel_pos # rel_pos or abs_pos
    n_heads: 8 # may need to be lower for smaller d_models

    # [left, right] specifies the number of steps to be seen from left and right of each step in self-attention
    # for att_context_style=regular, the right context is recommended to be a small number around 0 to 3 as multiple-layers may increase the effective right context too large
    # for att_context_style=chunked_limited, the left context needs to be divisible by the right context plus one
    # look-ahead(secs) = att_context_size[1]*subsampling_factor*window_stride, example: 13*8*0.01=1.04s

    # For multi-lookahead models, you may specify a list of context sizes. During the training, different context sizes would be used randomly with the distribution specified by att_context_probs.
    # The first item in the list would be the default during test/validation/inference.
    # An example of settings for multi-lookahead:
    #    att_context_size: [[70,13],[70,6],[70,1],[70,0]]
    #    att_context_probs: [0.25, 0.25, 0.25, 0.25]
    att_context_size: [70, 13] # -1 means unlimited context
    att_context_style: chunked_limited # regular or chunked_limited
    att_context_probs: null

    xscaling: true # scales up the input embeddings by sqrt(d_model)
    pos_emb_max_len: 5000

    # Convolution module's params
    conv_kernel_size: 9
    conv_norm_type: 'layer_norm' # batch_norm or layer_norm or groupnormN (N specifies the number of groups)

    # conv_context_size can be "causal" or a list of two integers such that conv_context_size[0]+conv_context_size[1]+1 == conv_kernel_size
    # null means [(kernel_size-1)//2, (kernel_size-1)//2], and 'causal' means [(kernel_size-1), 0]
    # Causal convolutions are recommended, as non-causal convolutions would increase the effective right context and therefore the look-ahead significantly
    conv_context_size: causal

    ### regularization
    dropout: 0.1 # The dropout used in most of the Conformer Modules
    dropout_pre_encoder: 0.1 # The dropout used before the encoder
    dropout_emb: 0.0 # The dropout used for embeddings
    dropout_att: 0.1 # The dropout for multi-headed attention modules

    # set to non-zero to enable stochastic depth
    stochastic_depth_drop_prob: 0.0
    stochastic_depth_mode: linear  # linear or uniform
    stochastic_depth_start_layer: 1

  decoder:
    _target_: nemo.collections.asr.modules.RNNTDecoder
    normalization_mode: null # Currently only null is supported for export.
    random_state_sampling: false # Random state sampling: https://arxiv.org/pdf/1910.11455.pdf
    blank_as_pad: true # This flag must be set in order to support exporting of RNNT models + efficient inference.

    prednet:
      pred_hidden: ${model.model_defaults.pred_hidden}
      pred_rnn_layers: 1
      t_max: null
      dropout: 0.2

  joint:
    _target_: nemo.collections.asr.modules.RNNTJoint
    log_softmax: null  # 'null' would set it automatically according to CPU/GPU device
    preserve_memory: false  # dramatically slows down training, but might preserve some memory

    # Fuses the computation of prediction net + joint net + loss + WER calculation
    # to be run on sub-batches of size `fused_batch_size`.
    # When this flag is set to true, consider the `batch_size` of *_ds to be just `encoder` batch size.
    # `fused_batch_size` is the actual batch size of the prediction net, joint net and transducer loss.
    # Using small values here will preserve a lot of memory during training, but will make training slower as well.
    # An optimal ratio of fused_batch_size : *_ds.batch_size is 1:1.
    # However, to preserve memory, this ratio can be 1:8 or even 1:16.
    # Extreme case of 1:B (i.e. fused_batch_size=1) should be avoided as training speed would be very slow.
    fuse_loss_wer: true
    fused_batch_size: 4

    jointnet:
      joint_hidden: ${model.model_defaults.joint_hidden}
      activation: "relu"
      dropout: 0.2

  decoding:
    strategy: "greedy_batch" # can be greedy, greedy_batch, beam, tsd, alsd.

    # greedy strategy config
    greedy:
      max_symbols: 10

    # beam strategy config
    beam:
      beam_size: 2
      return_best_hypothesis: False
      score_norm: true
      tsd_max_sym_exp: 50  # for Time Synchronous Decoding
      alsd_max_target_len: 2.0  # for Alignment-Length Synchronous Decoding

  aux_ctc:
    ctc_loss_weight: 0.3 # the weight used to combine the CTC loss with the RNNT loss
    use_cer: false
    ctc_reduction: 'mean_batch'
    decoder:
      _target_: nemo.collections.asr.modules.ConvASRDecoder
      feat_in: null
      num_classes: -1
      vocabulary: []
    decoding:
      strategy: "greedy"

  interctc:
    loss_weights: []
    apply_at_layers: []

  loss:
    loss_name: "default"
    warprnnt_numba_kwargs:
      # FastEmit regularization: https://arxiv.org/abs/2010.11148
      # You may enable FastEmit to increase the accuracy and reduce the latency of the model for streaming
      # You may set it to lower values like 1e-3 for models with larger right context
      fastemit_lambda: 5e-3  # Recommended values to be in range [1e-4, 1e-2], 0.001 is a good start.
      clamp: -1.0  # if > 0, applies gradient clamping in range [-clamp, clamp] for the joint tensor only.

  optim:
    name: adamw
    lr: 5.0
    # optimizer arguments
    betas: [0.9, 0.98]
    weight_decay: 1e-3

    # scheduler setup
    sched:
      name: NoamAnnealing
      d_model: ${model.encoder.d_model}
      # scheduler config override
      warmup_steps: 10000
      warmup_ratio: null
      min_lr: 1e-6

trainer:
  devices: -1 # number of GPUs, -1 would use all available GPUs
  num_nodes: 1
  max_epochs: 100
  max_steps: -1 # computed at runtime if not set
  val_check_interval: 1.0 # Set to 0.25 to check 4 times per epoch, or an int for number of iterations
  accelerator: auto
  strategy: ddp
  accumulate_grad_batches: 1
  gradient_clip_val: 1.0
  precision: 32 # 16, 32, or bf16
  log_every_n_steps: 40  # Interval of logging.
  enable_progress_bar: True
  num_sanity_val_steps: 0 # number of steps to perform validation steps for sanity check the validation process before starting the training, setting to 0 disables it
  check_val_every_n_epoch: 1 # number of evaluations on validation every n epochs
  sync_batchnorm: true
  enable_checkpointing: false  # Provided by exp_manager
  logger: false  # Provided by exp_manager
  benchmark: false # needs to be false for models with variable-length speech input as it slows down training

exp_manager:
  exp_dir: "/home/indeikin/PycharmProjects/NEMO_Project/fast_conformer/models"
  name: ${name}
  create_tensorboard_logger: true
  create_checkpoint_callback: true
  checkpoint_callback_params:
    # in case of multiple validation sets, first one is used
    monitor: "val_wer"
    mode: "min"
    save_top_k: 5
    always_save_nemo: True # saves the checkpoints as nemo files instead of PTL checkpoints
  resume_from_checkpoint: "/home/user/PycharmProjects/NEMO_Project/base_models/stt_en_fastconformer_hybrid_large_streaming_multi.nemo" # The path to a checkpoint file to continue the training, restores the whole state including the epoch, step, LR schedulers, apex, etc.
  resume_if_exists: false
  resume_ignore_no_checkpoint: false

  create_wandb_logger: false
  wandb_logger_kwargs:
    name: null
    project: null

But it gives me the following error:

Restoring states from the checkpoint path at /home/user/PycharmProjects/NEMO_Project/base_models/stt_en_fastconformer_hybrid_large_streaming_multi.nemo
Error executing job with overrides: []
Traceback (most recent call last):
  File "/home/user/PycharmProjects/NEMO_Project/fast_conformer/speech_to_text_hybrid_rnnt_ctc_bpe.py", line 83, in main
    trainer.fit(asr_model)
  File "/home/user/PycharmProjects/NEMO_Project/venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 532, in fit
    call._call_and_handle_interrupt(
  File "/home/user/PycharmProjects/NEMO_Project/venv/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 42, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
  File "/home/user/PycharmProjects/NEMO_Project/venv/lib/python3.10/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 93, in launch
    return function(*args, **kwargs)
  File "/home/user/PycharmProjects/NEMO_Project/venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 571, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/user/PycharmProjects/NEMO_Project/venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 946, in _run
    self._checkpoint_connector._restore_modules_and_callbacks(ckpt_path)
  File "/home/user/PycharmProjects/NEMO_Project/venv/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/checkpoint_connector.py", line 399, in _restore_modules_and_callbacks
    self.resume_start(checkpoint_path)
  File "/home/user/PycharmProjects/NEMO_Project/venv/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/checkpoint_connector.py", line 83, in resume_start
    loaded_checkpoint = self.trainer.strategy.load_checkpoint(checkpoint_path)
  File "/home/user/PycharmProjects/NEMO_Project/venv/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 360, in load_checkpoint
    return self.checkpoint_io.load_checkpoint(checkpoint_path)
  File "/home/user/PycharmProjects/NEMO_Project/venv/lib/python3.10/site-packages/lightning_fabric/plugins/io/torch_io.py", line 91, in load_checkpoint
    return pl_load(path, map_location=map_location)
  File "/home/user/PycharmProjects/NEMO_Project/venv/lib/python3.10/site-packages/lightning_fabric/utilities/cloud_io.py", line 52, in _load
    return torch.load(f, map_location=map_location)  # type: ignore[arg-type]
  File "/home/user/PycharmProjects/NEMO_Project/venv/lib/python3.10/site-packages/torch/serialization.py", line 1028, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/home/user/PycharmProjects/NEMO_Project/venv/lib/python3.10/site-packages/torch/serialization.py", line 1231, in _legacy_load
    return legacy_load(f)
  File "/home/user/PycharmProjects/NEMO_Project/venv/lib/python3.10/site-packages/torch/serialization.py", line 1117, in legacy_load
    tar.extract('storages', path=tmpdir)
  File "/usr/local/lib/python3.10/tarfile.py", line 2288, in extract
    tarinfo = self._get_extract_tarinfo(member, filter_function, path)
  File "/usr/local/lib/python3.10/tarfile.py", line 2295, in _get_extract_tarinfo
    tarinfo = self.getmember(member)
  File "/usr/local/lib/python3.10/tarfile.py", line 1978, in getmember
    raise KeyError("filename %r not found" % name)
KeyError: "filename 'storages' not found"

I tried to convert the .nemo file to .ckpt with code like this:

import nemo.collections.asr as nemo_asr
import torch

# Restore the full model from the .nemo archive and print its structure
model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.restore_from(restore_path="base_models/stt_en_fastconformer_hybrid_large_streaming_multi.nemo")
model.summarize()
# Extract the raw state dict into save_dir as a plain PyTorch checkpoint
state_dict = model.extract_state_dict_from('base_models/stt_en_fastconformer_hybrid_large_streaming_multi.nemo', save_dir='base_models/pt_ckpt/')

But it still gives me an error. In that case, the error looks like:

 return checkpoint["pytorch-lightning_version"]
KeyError: 'pytorch-lightning_version'

Any idea what I should change to fix that? Or maybe I'm missing something?

titu1994 commented 9 months ago

To confirm, you're using a fine-tuning script from NeMo, right? The one inside examples?

@VahidooX is there something up with the checkpoint? The config seems OK.

titu1994 commented 9 months ago

Plus, that's not exactly the right call for extraction - the key should be one of the modules inside the actual model, not the model name. But anyway, we don't usually support inference or training with bare PyTorch checkpoints, only with .nemo files.
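
For example, module-level extraction might look something like this (just a sketch; as I recall, split_by_module saves one checkpoint per submodule rather than a single model_weights.ckpt):

import nemo.collections.asr as nemo_asr

# split_by_module=True writes one checkpoint per submodule (encoder.ckpt,
# decoder.ckpt, ...) into save_dir instead of one model_weights.ckpt.
nemo_asr.models.EncDecHybridRNNTCTCBPEModel.extract_state_dict_from(
    "base_models/stt_en_fastconformer_hybrid_large_streaming_multi.nemo",
    save_dir="base_models/pt_ckpt/",
    split_by_module=True,
)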

VahidooX commented 9 months ago

Your model file looks to be corrupted. Please download it again and retry. Even training from scratch should work. In that issue, they used a very small batch size, which makes it hard to train a model.

traidn commented 9 months ago

@VahidooX I've tried using the inference script to run this model on an audio file, and it works fine. It produces good English transcriptions, but when I try to use it for training it still throws the error.

traidn commented 9 months ago

@titu1994 I use the script speech_to_text_hybrid_rnnt_ctc_bpe.py from your repo. It works fine for training from scratch (actually it doesn't converge, but at least it does something), but it throws the error when fine-tuning.

Full code:

import pytorch_lightning as pl
from omegaconf import OmegaConf

from nemo.collections.asr.models import EncDecHybridRNNTCTCBPEModel
from nemo.core.config import hydra_runner
from nemo.utils import logging
from nemo.utils.exp_manager import exp_manager

@hydra_runner(
    config_path="./conf", config_name="fastconformer_hybrid_transducer_ctc_bpe_streaming.yaml"
)
def main(cfg):
    logging.info(f'Hydra config: {OmegaConf.to_yaml(cfg)}')

    trainer = pl.Trainer(**cfg.trainer)
    exp_manager(trainer, cfg.get("exp_manager", None))
    asr_model = EncDecHybridRNNTCTCBPEModel(cfg=cfg.model, trainer=trainer)

    # Initialize the weights of the model from another model, if provided via config
    asr_model.maybe_init_from_pretrained_checkpoint(cfg)

    trainer.fit(asr_model)

    if hasattr(cfg.model, 'test_ds') and cfg.model.test_ds.manifest_filepath is not None:
        if asr_model.prepare_test(trainer):
            trainer.test(asr_model)

if __name__ == '__main__':
    main()  # noqa pylint: disable=no-value-for-parameter

Maybe I should use something else?
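
Or maybe the issue is that exp_manager.resume_from_checkpoint is handed straight to torch.load by Lightning, which can't read a .nemo tar archive - that would explain the "filename 'storages' not found" KeyError. If so, perhaps the intended route is to restore the .nemo model directly and swap in my tokenizer before training. A rough sketch of what I mean, assuming change_vocabulary is available on the hybrid BPE model class:

import nemo.collections.asr as nemo_asr

# Restore the pretrained English model from the .nemo file ...
asr_model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.restore_from(
    "base_models/stt_en_fastconformer_hybrid_large_streaming_multi.nemo"
)
# ... and retarget it to the Russian tokenizer before fine-tuning.
asr_model.change_vocabulary(
    new_tokenizer_dir="fast_conformer/unigram_tokenizer/tokenizer_spe_unigram_v1024",
    new_tokenizer_type="bpe",
)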

bene-ges commented 9 months ago

@traidn, maybe this is not the official way, but you can try

import torch  # assuming asr_model is the EncDecHybridRNNTCTCBPEModel built from the config

device = "cuda" if torch.cuda.is_available() else "cpu"
state_dict = torch.load("model_weights.ckpt", map_location=device)
asr_model.load_state_dict(state_dict)

You can get model_weights.ckpt by unpacking the .nemo checkpoint with tar xvf.
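
If the new tokenizer changes the decoder/joint vocabulary, you could also keep only the encoder weights and load non-strictly - just a sketch, not an official API:

import torch

# Keep encoder weights only; strict=False tolerates the missing decoder/joint keys.
state_dict = torch.load("model_weights.ckpt", map_location="cpu")
encoder_only = {k: v for k, v in state_dict.items() if k.startswith("encoder.")}
missing, unexpected = asr_model.load_state_dict(encoder_only, strict=False)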

traidn commented 9 months ago

@bene-ges Yeah, that lets me get the weights, but unfortunately it gives me the error

return checkpoint["pytorch-lightning_version"]
KeyError: 'pytorch-lightning_version'

at the start of training.

bene-ges commented 9 months ago

@traidn - maybe you can try to load the weights and then train like you did from scratch, without resuming from a checkpoint?

traidn commented 9 months ago

@bene-ges Thanks for the idea, but it still gives the pytorch-lightning_version KeyError when I load the state dict, even if I leave the "resume_from_checkpoint" field empty.

VahidooX commented 9 months ago

I am going to try this model next week to make sure it is not a bug. Have you tried the latest NeMo release, or one of the older releases, to convert and train the model?

traidn commented 9 months ago

@VahidooX That would be appreciated. I use nemo-toolkit 1.21.0 and PyTorch 2.1.1.

VahidooX commented 9 months ago

In the meantime, would you please try an older NeMo version for both conversion and training?

traidn commented 9 months ago

I tried the previous version, and it still doesn't work properly. Unfortunately, I can't install an even older version in my environment right now because I'm having trouble building wheels.

traidn commented 9 months ago

@VahidooX I downgraded NeMo to version 1.20.0 (the release where the STT En FastConformer Hybrid Large Streaming 1040ms model (which doesn't train either) and STT En FastConformer Hybrid Transducer-CTC Large Streaming Multi were introduced) and downloaded the config file from branch r1.20.0. But it still throws KeyError: "filename 'storages' not found". And one more question: do these two models (STT En FastConformer Hybrid Large Streaming 1040ms and STT En FastConformer Hybrid Transducer-CTC Large Streaming Multi) differ by only one line in the config, att_context_size? Both model cards link to the same config file.
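
If I read the multi-lookahead example in the config comments correctly, the difference would be something like this (my guess from the config comments, not from the model cards):

# single look-ahead, 1040ms: 13 * 8 * 0.01 = 1.04 s
att_context_size: [70, 13]

# multi look-ahead, trained with randomly sampled context sizes
att_context_size: [[70,13],[70,6],[70,1],[70,0]]
att_context_probs: [0.25, 0.25, 0.25, 0.25]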

github-actions[bot] commented 8 months ago

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.

github-actions[bot] commented 7 months ago

This issue was closed because it has been inactive for 7 days since being marked as stale.