facebookresearch / fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Empty and random transcripts after finetuning wav2vec2. #5417

Open arvindmn01 opened 8 months ago

arvindmn01 commented 8 months ago

Hi @alexeib @patrickvonplaten, I have fine-tuned wav2vec2 models, specifically large-lv60, base, base-960h, and large-960, on Indian English data from four speakers. However, I am getting empty or random transcripts after fine-tuning.

Below are the training arguments I have used.

from transformers import Wav2Vec2ForCTC, Trainer, TrainingArguments

model = Wav2Vec2ForCTC.from_pretrained(
    model_id,
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)

# I have tried freezing the feature extractor as well:
# model.freeze_feature_extractor()
model.freeze_feature_encoder()

training_args = TrainingArguments(
  output_dir="wav2vec2_large_5k_ckpts",
  group_by_length=False,
  per_device_train_batch_size=8,
  gradient_accumulation_steps=8,
  evaluation_strategy="steps",
  num_train_epochs=50,
  fp16=True,
  gradient_checkpointing=True,
# Tried different eval, save, logging, and warmup steps.
  save_steps=200, 
  eval_steps=200,
  logging_steps=200,
  learning_rate=1e-5,
#   weight_decay=0.005,
  warmup_steps=200,
  save_total_limit=2,
  push_to_hub=False,
  do_train=True,
  do_eval=True,
  load_best_model_at_end=True,
  metric_for_best_model="wer",
  greater_is_better=False,
  #report_to="tensorboard",
  #seed=101,
)

trainer = Trainer(
    model=model,
    data_collator=data_collator,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=ds_prepared["train"],
    eval_dataset=ds_prepared["validation"],
    tokenizer=processor.feature_extractor,
    # callbacks = [EarlyStoppingCallback(early_stopping_patience=3)]
)
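
After training, I decode a few validation examples to see what the model actually predicts; this makes it easy to tell whether the output collapses to the blank/pad token (which shows up as an empty transcript) or is genuinely random. A minimal sketch, assuming the model, processor, and ds_prepared objects from above:

import torch

# Decode one validation example to inspect the raw prediction.
model.eval()
sample = ds_prepared["validation"][0]
input_values = torch.tensor(sample["input_values"]).unsqueeze(0).to(model.device)
with torch.no_grad():
    logits = model(input_values).logits
# Greedy CTC decoding: argmax over the vocabulary at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print("prediction:", processor.batch_decode(pred_ids)[0])
print("reference :", processor.decode(sample["labels"]))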

I have experimented with various learning rates, such as 1e-7 and 1e-9. Additionally, I varied the dataset size (5k, 2k, 1k, and 15k data points), but I consistently obtained the same results.

I have followed this fine-tuning guide: https://huggingface.co/blog/fine-tune-wav2vec2-english.
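
The data_collator and compute_metrics passed to the Trainer follow that guide. For reference, the metric function is essentially the following (a sketch along the blog's lines; here the WER metric is loaded through the evaluate library, whereas the blog uses the older datasets.load_metric):

import numpy as np
import evaluate

wer_metric = evaluate.load("wer")

def compute_metrics(pred):
    # Greedy CTC decoding over the eval logits.
    pred_ids = np.argmax(pred.predictions, axis=-1)
    # The collator masks label padding with -100; restore the pad token id
    # so the tokenizer can decode the references.
    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
    pred_str = processor.batch_decode(pred_ids)
    # group_tokens=False keeps repeated characters in the references intact.
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
    return {"wer": wer_metric.compute(predictions=pred_str, references=label_str)}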

Training code for your reference: model_training.txt, prepare_dataset.txt, prepare_tokenizer.txt

sangvu0909 commented 1 month ago

I have the same issue. Do you have any solution for it?