facebookresearch / fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Decoder not compatible when running inference with the Wav2Vec2_Seq2Seq model #4596

Open effendijohanes opened 2 years ago

effendijohanes commented 2 years ago

🐛 Bug

When running inference with a wav2vec_seq2seq model, the decoder is not compatible with the model.

To Reproduce

Steps to reproduce the behavior (always include the command you ran):

I ran the inference command as follows:

```
python fairseq/examples/speech_recognition/infer.py DATA \
  --task audio_finetuning --nbest 1 --path checkpoint_best.pt \
  --gen-subset dev --results-path ./ --w2l-decoder viterbi \
  --criterion label_smoothed_cross_entropy --labels ltr \
  --max-tokens 1000000 --post-process sentencepiece --autoregressive
```

The run fails; the argument namespace logged by infer.py is:

```
Namespace(all_gather_list_size=16384, amp=False, amp_batch_retries=2, amp_init_scale=128, amp_scale_window=None, autoregressive=True, azureml_logging=False, batch_size=None, batch_size_valid=None, beam=5, beam_size_token=100, beam_threshold=25.0, best_checkpoint_metric='loss', bf16=False, binarized_dataset=False, bpe=None, broadcast_buffers=False, bucket_cap_mb=25, checkpoint_shard_count=1, checkpoint_suffix='', combine_valid_subsets=None, constraints=None, continue_once=None, cpu=False, cpu_offload=False, criterion='label_smoothed_cross_entropy', curriculum=0, data='DATA', data_buffer_size=10, dataset_impl=None, ddp_backend='pytorch_ddp', ddp_comm_hook='none', decoding_format=None, device_id=0, disable_validation=False, distributed_backend='nccl', distributed_init_method=None, distributed_no_spawn=False, distributed_num_procs=8, distributed_port=-1, distributed_rank=0, distributed_world_size=1, diverse_beam_groups=-1, diverse_beam_strength=0.5, diversity_rate=-1.0, dump_emissions=None, dump_features=None, empty_cache_freq=0, enable_padding=False, eos=2, eval_bleu=False, eval_bleu_args='{}', eval_bleu_detok=None, eval_bleu_detok_args='{}', eval_bleu_print_samples=False, eval_bleu_remove_bpe=None, eval_tokenized_bleu=False, eval_wer=False, eval_wer_post_process='letter', eval_wer_tokenizer=None, fast_stat_sync=False, find_unused_parameters=False, finetune_from_model=None, fix_batches_to_gpus=False, fixed_validation_seed=None, force_anneal=None, fp16=False, fp16_init_scale=128, fp16_no_flatten_grads=False, fp16_scale_tolerance=0.0, fp16_scale_window=None, fp32_reduce_scatter=False, gen_subset='dev', gradient_as_bucket_view=False, grouped_shuffling=False, heartbeat_timeout=-1, ignore_prefix_size=0, ignore_unused_valid_subsets=False, inferred_w2v_config=None, iter_decode_eos_penalty=0.0, iter_decode_force_max_iter=False, iter_decode_max_iter=10, iter_decode_with_beam=1, iter_decode_with_external_reranker=False, keep_best_checkpoints=-1, keep_interval_updates=-1, keep_interval_updates_pattern=-1, keep_last_epochs=-1, kenlm_model=None, kspmodel=None, label_smoothing=0.0, labels='ltr', lenpen=1, lexicon=None, lm_path=None, lm_weight=0.0, load_checkpoint_on_all_dp_ranks=False, load_emissions=None, localsgd_frequency=3, log_file=None, log_format=None, log_interval=100, lr_scheduler='fixed', lr_shrink=0.1, match_source_len=False, max_len_a=0, max_len_b=200, max_sample_size=None, max_tokens=1000000, max_tokens_valid=1000000, max_valid_steps=None, maximize_best_checkpoint_metric=False, memory_efficient_bf16=False, memory_efficient_fp16=False, min_len=1, min_loss_scale=0.0001, min_sample_size=None, model_overrides='{}', model_parallel_size=1, nbest=1, no_beamable_mm=False, no_early_stop=False, no_epoch_checkpoints=False, no_last_checkpoints=False, no_progress_bar=False, no_repeat_ngram_size=0, no_reshard_after_forward=False, no_save=False, no_save_optimizer_state=False, no_seed_provided=False, normalize=False, not_fsdp_flatten_parameters=False, nprocs_per_node=8, num_batch_buckets=0, num_shards=1, num_workers=1, on_cpu_convert_precision=False, optimizer=None, optimizer_overrides='{}', pad=1, path='checkpoint_best.pt', patience=-1, pipeline_balance=None, pipeline_checkpoint='never', pipeline_chunks=0, pipeline_decoder_balance=None, pipeline_decoder_devices=None, pipeline_devices=None, pipeline_encoder_balance=None, pipeline_encoder_devices=None, pipeline_model_parallel=False, plasma_path='/tmp/plasma', post_process='sentencepiece', precompute_mask_indices=False, prefix_size=0,
print_alignment=None, print_step=False, profile=False, quantization_config_path=None, quiet=False, replace_unk=None, report_accuracy=False, required_batch_size_multiple=8, required_seq_len_multiple=1, reset_dataloader=False, reset_logging=False, reset_lr_scheduler=False, reset_meters=False, reset_optimizer=False, restore_file='checkpoint_last.pt', results_path='./', retain_dropout=False, retain_dropout_modules=None, retain_iter_history=False, rnnt_decoding_type='greedy', rnnt_len_penalty=-0.5, sacrebleu=False, sample_rate=16000, sampling=False, sampling_topk=-1, sampling_topp=-1.0, save_dir='checkpoints', save_interval=1, save_interval_updates=0, score_reference=False, scoring='bleu', seed=1, shard_id=0, sil_weight=0.0, simul_type=None, skip_invalid_size_inputs_valid_test=False, slowmo_base_algorithm='localsgd', slowmo_momentum=None, suppress_crashes=False, task='audio_finetuning', temperature=1.0, tensorboard_logdir=None, text_compression_level='none', threshold_loss_scale=None, tokenizer=None, tpu=False, train_subset='train', unit_lm=False, unk=3, unk_weight=-inf, unkpen=0, unnormalized=False, update_epoch_batch_itr=False, update_ordered_indices_seed=False, use_plasma_view=False, use_sharded_state=False, user_dir=None, valid_subset='valid', validate_after_updates=0, validate_interval=1, validate_interval_updates=0, w2l_decoder='viterbi', wandb_project=None, warmup_updates=0, wfstlm=None, word_score=1.0, write_checkpoints_asynchronously=False, zero_sharding='none')
```

Expected behavior

Decoding starts.

Am I using the wrong decoder? If so, which one is correct?

Moreover, is there any documentation on how to use wav2vec2_seq2seq, similar to the documentation for wav2vec2?
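For reference, a quick way to check which architecture a checkpoint was fine-tuned with is to read the model name out of the saved config. This is a minimal sketch assuming a recent fairseq checkpoint, which stores a Hydra cfg (older checkpoints store an argparse `args` namespace instead):

```
python -c "
import torch
state = torch.load('checkpoint_best.pt', map_location='cpu')
print(state['cfg']['model']['_name'])
"
# 'wav2vec_ctc'     -> CTC model; the examples/speech_recognition decoders apply
# 'wav2vec_seq2seq' -> autoregressive decoder; beam search via fairseq-generate
```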

Environment

SeunghyunSEO commented 2 years ago

The decoders in the speech_recognition directory are for CTC-trained acoustic models (AMs). You can use fairseq-generate for seq2seq beam search.
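For example, a fairseq-generate call mirroring the flags from the command above might look like the sketch below (the data path, checkpoint, subset name, and batch settings are carried over from the report, not a verified recipe):

```
fairseq-generate DATA \
  --task audio_finetuning --labels ltr --autoregressive \
  --path checkpoint_best.pt --gen-subset dev \
  --beam 5 --nbest 1 --max-tokens 1000000 \
  --post-process sentencepiece --scoring wer \
  --results-path ./
```

This runs fairseq's standard autoregressive beam search over the model's decoder, which is what a wav2vec_seq2seq checkpoint needs; the --w2l-decoder options in infer.py only consume CTC emissions.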