I'm facing the same problem
@indifferen Did you solve the problem?
Add these parameters to the finetune config file:
```yaml
task:
  _name: audio_pretraining
  data: ???
  max_sample_size: 250000
  min_sample_size: 32000
  sample_rate: 16000
  autoregressive: false
```
then fine-tune your model. (If you get other errors, just add those parameters to the finetune config file as well.)
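To confirm the fix took effect, you can inspect the task config that fairseq stored inside the fine-tuned checkpoint. A minimal sketch, assuming the checkpoint keeps its Hydra config under the `cfg` key (as `fairseq-hydra-train` checkpoints do) and reusing the checkpoint path from this thread:

```python
# Sketch: print the task config saved inside a fine-tuned checkpoint.
# Assumption: the Hydra config is stored under state["cfg"].
import torch
from omegaconf import OmegaConf

ckpt = "outputs/2021-01-09/08-48-56/checkpoints/checkpoint_best.pt"
state = torch.load(ckpt, map_location="cpu")

# The keys infer.py expects (sample_rate, max_sample_size, ...) should show up here.
print(OmegaConf.to_yaml(state["cfg"].task))
```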
I fine-tuned a wav2vec model:
CUDA_VISIBLE_DEVICES=0 fairseq-hydra-train task.data=/data/dev-clean-2 model.w2v_path=/data/wav2vec_small.pt distributed_training.distributed_world_size=1 +optimization.update_freq='[24]' --config-dir examples/wav2vec/config/finetuning --config-name base_10m
When I run:
python examples/speech_recognition/infer.py /data/dev-clean-2/ --task audio_pretraining --nbest 1 --path outputs/2021-01-09/08-48-56/checkpoints/checkpoint_best.pt --gen-subset valid --results-path /data/result --w2l-decoder viterbi --criterion ctc --labels ltr --max-tokens 2000000 --post-process letter
I get the following output and error:
INFO:__main__:Namespace(all_gather_list_size=16384, autoregressive=False, azureml_logging=False, batch_size=None, batch_size_valid=None, beam=5, beam_size_token=100, beam_threshold=25.0, best_checkpoint_metric='loss', bf16=False, bpe=None, broadcast_buffers=False, bucket_cap_mb=25, checkpoint_shard_count=1, checkpoint_suffix='', constraints=None, cpu=False, criterion='ctc', curriculum=0, data='/data/dev-clean-2/', data_buffer_size=10, dataset_impl=None, ddp_backend='c10d', decoding_format=None, device_id=0, disable_validation=False, distributed_backend='nccl', distributed_init_method=None, distributed_no_spawn=False, distributed_port=-1, distributed_rank=0, distributed_world_size=1, distributed_wrapper='DDP', diverse_beam_groups=-1, diverse_beam_strength=0.5, diversity_rate=-1.0, dump_emissions=None, dump_features=None, empty_cache_freq=0, enable_padding=False, eos=2, eval_wer=False, eval_wer_post_process='letter', eval_wer_tokenizer=None, fast_stat_sync=False, find_unused_parameters=False, finetune_from_model=None, fix_batches_to_gpus=False, fixed_validation_seed=None, force_anneal=None, fp16=False, fp16_init_scale=128, fp16_no_flatten_grads=False, fp16_scale_tolerance=0.0, fp16_scale_window=None, gen_subset='valid', heartbeat_timeout=-1, iter_decode_eos_penalty=0.0, iter_decode_force_max_iter=False, iter_decode_max_iter=10, iter_decode_with_beam=1, iter_decode_with_external_reranker=False, keep_best_checkpoints=-1, keep_interval_updates=-1, keep_last_epochs=-1, kenlm_model=None, kspmodel=None, labels='ltr', lenpen=1, lexicon=None, lm_path=None, lm_weight=0.0, load_checkpoint_on_all_dp_ranks=False, load_emissions=None, localsgd_frequency=3, log_format=None, log_interval=100, lr_scheduler='fixed', lr_shrink=0.1, match_source_len=False, max_len_a=0, max_len_b=200, max_sample_size=None, max_tokens=2000000, max_tokens_valid=2000000, maximize_best_checkpoint_metric=False, memory_efficient_bf16=False, memory_efficient_fp16=False, min_len=1, min_loss_scale=0.0001, min_sample_size=None, model_overrides='{}', model_parallel_size=1, nbest=1, no_beamable_mm=False, no_early_stop=False, no_epoch_checkpoints=False, no_last_checkpoints=False, no_progress_bar=False, no_repeat_ngram_size=0, no_save=False, no_save_optimizer_state=False, no_seed_provided=False, normalize=False, nprocs_per_node=2, num_shards=1, num_workers=1, optimizer=None, optimizer_overrides='{}', pad=1, path='outputs/2021-01-09/08-48-56/checkpoints/checkpoint_best.pt', patience=-1, pipeline_balance=None, pipeline_checkpoint='never', pipeline_chunks=0, pipeline_decoder_balance=None, pipeline_decoder_devices=None, pipeline_devices=None, pipeline_encoder_balance=None, pipeline_encoder_devices=None, pipeline_model_parallel=False, post_process='letter', prefix_size=0, print_alignment=None, print_step=False, profile=False, quantization_config_path=None, quiet=False, replace_unk=None, required_batch_size_multiple=8, required_seq_len_multiple=1, reset_dataloader=False, reset_logging=True, reset_lr_scheduler=False, reset_meters=False, reset_optimizer=False, restore_file='checkpoint_last.pt', results_path='/data/result', retain_dropout=False, retain_dropout_modules=None, retain_iter_history=False, rnnt_decoding_type='greedy', rnnt_len_penalty=-0.5, sacrebleu=False, sample_rate=16000, sampling=False, sampling_topk=-1, sampling_topp=-1.0, save_dir='checkpoints', save_interval=1, save_interval_updates=0, score_reference=False, scoring='bleu', seed=1, shard_id=0, sil_weight=0.0, skip_invalid_size_inputs_valid_test=False, slowmo_algorithm='LocalSGD',
slowmo_momentum=None, task='audio_pretraining', temperature=1.0, tensorboard_logdir=None, threshold_loss_scale=None, tokenizer=None, tpu=False, train_subset='train', unit_lm=False, unk=3, unk_weight=-inf, unkpen=0, unnormalized=False, user_dir=None, valid_subset='valid', validate_after_updates=0, validate_interval=1, validate_interval_updates=0, w2l_decoder='viterbi', wandb_project=None, warmup_updates=0, wer_args=None, wer_kenlm_model=None, wer_lexicon=None, wer_lm_weight=2.0, wer_word_score=-1.0, wfstlm=None, word_score=1.0, zero_infinity=False, zero_sharding='none')
INFO:__main__:| decoding with criterion ctc
INFO:__main__:| loading model(s) from outputs/2021-01-09/08-48-56/checkpoints/checkpoint_best.pt
Traceback (most recent call last):
File "examples/speech_recognition/infer.py", line 428, in <module>
cli_main()
File "examples/speech_recognition/infer.py", line 424, in cli_main
main(args)
File "examples/speech_recognition/infer.py", line 240, in main
task.load_dataset(args.gen_subset, task_cfg=saved_cfg.task)
File "/root/fairseq/fairseq/tasks/audio_pretraining.py", line 139, in load_dataset
sample_rate=task_cfg.sample_rate,
File "/usr/local/lib/python3.6/dist-packages/omegaconf/dictconfig.py", line 297, in getattr
self._format_and_raise(key=key, value=None, cause=e)
File "/usr/local/lib/python3.6/dist-packages/omegaconf/base.py", line 101, in _format_and_raise
type_override=type_override,
File "/usr/local/lib/python3.6/dist-packages/omegaconf/_utils.py", line 629, in format_and_raise
_raise(ex, cause)
File "/usr/local/lib/python3.6/dist-packages/omegaconf/_utils.py", line 610, in _raise
raise ex # set end OC_CAUSE=1 for full backtrace
File "/usr/local/lib/python3.6/dist-packages/omegaconf/dictconfig.py", line 295, in getattr
return self._get_impl(key=key, default_value=DEFAULT_VALUE_MARKER)
File "/usr/local/lib/python3.6/dist-packages/omegaconf/dictconfig.py", line 353, in _get_impl
node = self._get_node(key=key)
File "/usr/local/lib/python3.6/dist-packages/omegaconf/dictconfig.py", line 375, in _get_node
self._validate_get(key)
File "/usr/local/lib/python3.6/dist-packages/omegaconf/dictconfig.py", line 128, in _validate_get
key=key, value=value, cause=ConfigAttributeError(msg)
File "/usr/local/lib/python3.6/dist-packages/omegaconf/base.py", line 101, in _format_and_raise
type_override=type_override,
File "/usr/local/lib/python3.6/dist-packages/omegaconf/_utils.py", line 694, in format_and_raise
_raise(ex, cause)
File "/usr/local/lib/python3.6/dist-packages/omegaconf/_utils.py", line 610, in _raise
raise ex # set end OC_CAUSE=1 for full backtrace
omegaconf.errors.ConfigAttributeError: Key 'sample_rate' is not in struct
full_key: task.sample_rate
reference_type=Any
object_type=dict
I tried wav2vec_small_960h.pt and it works. Is there something wrong with my fine-tuning?
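The traceback shows where this comes from: infer.py calls task.load_dataset(args.gen_subset, task_cfg=saved_cfg.task), i.e. it reads the task config stored inside your checkpoint, and because the base_10m finetuning config did not define sample_rate, the saved (struct-mode) config has no such key. wav2vec_small_960h.pt works because its stored task config is complete. Besides re-finetuning with the extended config above, an already-trained checkpoint can be patched in place. A hedged sketch, assuming the config lives under the checkpoint's `cfg` key and reusing the values from the config snippet above:

```python
# Sketch: add the missing task keys to an existing fine-tuned checkpoint
# so that infer.py's task.load_dataset(...) can read them.
# Assumption: the Hydra config is stored under state["cfg"].
import torch
from omegaconf import open_dict

ckpt = "outputs/2021-01-09/08-48-56/checkpoints/checkpoint_best.pt"
state = torch.load(ckpt, map_location="cpu")

# open_dict temporarily lifts OmegaConf's struct flag so new keys can be added.
with open_dict(state["cfg"].task):
    state["cfg"].task.sample_rate = 16000
    state["cfg"].task.max_sample_size = 250000
    state["cfg"].task.min_sample_size = 32000
    state["cfg"].task.autoregressive = False

torch.save(state, ckpt)  # overwrites in place; keep a backup copy first
```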