facebookresearch / fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
MIT License

Wav2Vec2 finetuning error #5146

Closed BakingBrains closed 1 year ago

BakingBrains commented 1 year ago

I have a pretrained Wav2Vec2 Conformer model. When I run fine-tuning, loading the pretrained weights fails with a state_dict key mismatch: the model being instantiated expects standard Transformer encoder keys (self_attn.q_proj/k_proj/v_proj/out_proj, fc1, fc2), while the checkpoint contains Conformer-style keys (ffn1/ffn2, conv_module, self_attn.linear_q/linear_k/linear_v/linear_out/linear_pos). Full log and traceback below:

2023-05-24 12:08:08.964865: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-05-24 12:08:11 | INFO | fairseq.tasks.text_to_speech | Please install tensorboardX: pip install tensorboardX
[2023-05-24 12:08:16,118][fairseq_cli.train][INFO] - {'_name': None, 'common': {'_name': None, 'no_progress_bar': False, 'log_interval': 200, 'log_format': 'json', 'log_file': None, 'aim_repo': None, 'aim_run_hash': None, 'tensorboard_logdir': None, 'wandb_project': None, 'azureml_logging': False, 'seed': 1, 'cpu': False, 'tpu': False, 'bf16': False, 'memory_efficient_bf16': False, 'fp16': True, 'memory_efficient_fp16': False, 'fp16_no_flatten_grads': False, 'fp16_init_scale': 128, 'fp16_scale_window': None, 'fp16_scale_tolerance': 0.0, 'on_cpu_convert_precision': False, 'min_loss_scale': 0.0001, 'threshold_loss_scale': None, 'amp': False, 'amp_batch_retries': 2, 'amp_init_scale': 128, 'amp_scale_window': None, 'user_dir': None, 'empty_cache_freq': 0, 'all_gather_list_size': 16384, 'model_parallel_size': 1, 'quantization_config_path': None, 'profile': False, 'reset_logging': False, 'suppress_crashes': False, 'use_plasma_view': False, 'plasma_path': '/tmp/plasma'}, 'common_eval': {'_name': None, 'path': None, 'post_process': None, 'quiet': False, 'model_overrides': '{}', 'results_path': None}, 'distributed_training': {'_name': None, 'distributed_world_size': 1, 'distributed_num_procs': 1, 'distributed_rank': 0, 'distributed_backend': 'nccl', 'distributed_init_method': None, 'distributed_port': -1, 'device_id': 0, 'distributed_no_spawn': False, 'ddp_backend': 'legacy_ddp', 'ddp_comm_hook': 'none', 'bucket_cap_mb': 25, 'fix_batches_to_gpus': False, 'find_unused_parameters': False, 'gradient_as_bucket_view': False, 'fast_stat_sync': False, 'heartbeat_timeout': -1, 'broadcast_buffers': False, 'slowmo_momentum': None, 'slowmo_base_algorithm': 'localsgd', 'localsgd_frequency': 3, 'nprocs_per_node': 1, 'pipeline_model_parallel': False, 'pipeline_balance': None, 'pipeline_devices': None, 'pipeline_chunks': 0, 'pipeline_encoder_balance': None, 'pipeline_encoder_devices': None, 'pipeline_decoder_balance': None, 'pipeline_decoder_devices': None, 'pipeline_checkpoint': 'never', 'zero_sharding': 'none', 'fp16': True, 'memory_efficient_fp16': False, 'tpu': False, 'no_reshard_after_forward': False, 'fp32_reduce_scatter': False, 'cpu_offload': False, 'use_sharded_state': False, 'not_fsdp_flatten_parameters': False}, 'dataset': {'_name': None, 'num_workers': 6, 'skip_invalid_size_inputs_valid_test': True, 'max_tokens': 3200000, 'batch_size': None, 'required_batch_size_multiple': 8, 'required_seq_len_multiple': 1, 'dataset_impl': None, 'data_buffer_size': 10, 'train_subset': 'train', 'valid_subset': 'dev_other', 'combine_valid_subsets': None, 'ignore_unused_valid_subsets': False, 'validate_interval': 50, 'validate_interval_updates': 0, 'validate_after_updates': 10000, 'fixed_validation_seed': None, 'disable_validation': False, 'max_tokens_valid': 3200000, 'batch_size_valid': None, 'max_valid_steps': None, 'curriculum': 0, 'gen_subset': 'test', 'num_shards': 1, 'shard_id': 0, 'grouped_shuffling': False, 'update_epoch_batch_itr': False, 'update_ordered_indices_seed': False}, 'optimization': {'_name': None, 'max_epoch': 0, 'max_update': 20000, 'stop_time_hours': 0.0, 'clip_norm': 0.0, 'sentence_avg': True, 'update_freq': [4], 'lr': [5e-05], 'stop_min_lr': -1.0, 'use_bmuf': False, 'skip_remainder_batch': False, 'debug_param_names': False}, 'checkpoint': {'_name': None, 'save_dir': '/content/drive/MyDrive/W2V', 'restore_file': 'checkpoint_last.pt', 'continue_once': None, 'finetune_from_model': None, 'reset_dataloader': False, 'reset_lr_scheduler': False, 'reset_meters': False, 'reset_optimizer': False, 
'optimizer_overrides': '{}', 'save_interval': 50, 'save_interval_updates': 10000, 'keep_interval_updates': 1, 'keep_interval_updates_pattern': -1, 'keep_last_epochs': -1, 'keep_best_checkpoints': -1, 'no_save': False, 'no_epoch_checkpoints': True, 'no_last_checkpoints': False, 'no_save_optimizer_state': False, 'best_checkpoint_metric': 'wer', 'maximize_best_checkpoint_metric': False, 'patience': -1, 'checkpoint_suffix': '', 'checkpoint_shard_count': 1, 'load_checkpoint_on_all_dp_ranks': False, 'write_checkpoints_asynchronously': False, 'model_parallel_size': 1}, 'bmuf': {'_name': None, 'block_lr': 1.0, 'block_momentum': 0.875, 'global_sync_iter': 50, 'warmup_iterations': 500, 'use_nbm': False, 'average_sync': False, 'distributed_world_size': 1}, 'generation': {'_name': None, 'beam': 5, 'beam_mt': 0, 'nbest': 1, 'max_len_a': 0.0, 'max_len_b': 200, 'max_len_a_mt': 0.0, 'max_len_b_mt': 200, 'min_len': 1, 'match_source_len': False, 'unnormalized': False, 'no_early_stop': False, 'no_beamable_mm': False, 'lenpen': 1.0, 'lenpen_mt': 1.0, 'unkpen': 0.0, 'replace_unk': None, 'sacrebleu': False, 'score_reference': False, 'prefix_size': 0, 'no_repeat_ngram_size': 0, 'sampling': False, 'sampling_topk': -1, 'sampling_topp': -1.0, 'constraints': None, 'temperature': 1.0, 'diverse_beam_groups': -1, 'diverse_beam_strength': 0.5, 'diversity_rate': -1.0, 'print_alignment': None, 'print_step': False, 'lm_path': None, 'lm_weight': 0.0, 'iter_decode_eos_penalty': 0.0, 'iter_decode_max_iter': 10, 'iter_decode_force_max_iter': False, 'iter_decode_with_beam': 1, 'iter_decode_with_external_reranker': False, 'retain_iter_history': False, 'retain_dropout': False, 'retain_dropout_modules': None, 'decoding_format': None, 'no_seed_provided': False, 'eos_token': None}, 'eval_lm': {'_name': None, 'output_word_probs': False, 'output_word_stats': False, 'context_window': 0, 'softmax_batch': 9223372036854775807}, 'interactive': {'_name': None, 'buffer_size': 0, 'input': '-'}, 'model': {'_name': 'wav2vec_ctc', 'w2v_path': '/content/checkpoint_60_450000.pt', 'no_pretrained_weights': False, 'dropout_input': 0.0, 'final_dropout': 0.0, 'dropout': 0.0, 'attention_dropout': 0.0, 'activation_dropout': 0.1, 'apply_mask': True, 'mask_length': 10, 'mask_prob': 0.65, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'require_same_masks': True, 'mask_dropout': 0.0, 'mask_channel_length': 64, 'mask_channel_prob': 0.5, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'freeze_finetune_updates': 10000, 'feature_grad_mult': 0.0, 'layerdrop': 0.05, 'drop_path': 0.0, 'mask_channel_min_space': 1, 'mask_channel_before': False, 'normalize': False, 'update_alibi': True, 'data': '/content/wav_manifest', 'w2v_args': None, 'offload_activations': False, 'min_params_to_wrap': 100000000, 'checkpoint_activations': False, 'ddp_backend': 'legacy_ddp', 'zero_mask': False, 'load_ema': False, 'layer_decay': 1.0, 'layer_type': transformer, 'adp_num': -1, 'adp_dim': 64, 'adp_act_fn': 'relu', 'adp_trf_idx': 'all', 'freeze_regex': None, 'blank_weight': 0.0, 'blank_mode': 'add'}, 'task': {'_name': 'audio_finetuning', 'data': '/content/wav_manifest', 'labels': 'ltr', 'multi_corpus_keys': None, 'multi_corpus_sampling_weights': None, 'binarized_dataset': False, 'sample_rate': 16000, 'normalize': False, 'enable_padding': False, 'max_sample_size': None, 'min_sample_size': None, 'num_batch_buckets': 0, 'tpu': False, 'text_compression_level': none, 'rebuild_batches': True, 
'precompute_mask_config': None, 'post_save_script': None, 'subsample': 1.0, 'seed': 1, 'eval_wer': False, 'eval_wer_config': {'_name': None, 'beam': 5, 'beam_mt': 0, 'nbest': 1, 'max_len_a': 0.0, 'max_len_b': 200, 'max_len_a_mt': 0.0, 'max_len_b_mt': 200, 'min_len': 1, 'match_source_len': False, 'unnormalized': False, 'no_early_stop': False, 'no_beamable_mm': False, 'lenpen': 1.0, 'lenpen_mt': 1.0, 'unkpen': 0.0, 'replace_unk': None, 'sacrebleu': False, 'score_reference': False, 'prefix_size': 0, 'no_repeat_ngram_size': 0, 'sampling': False, 'sampling_topk': -1, 'sampling_topp': -1.0, 'constraints': None, 'temperature': 1.0, 'diverse_beam_groups': -1, 'diverse_beam_strength': 0.5, 'diversity_rate': -1.0, 'print_alignment': None, 'print_step': False, 'lm_path': None, 'lm_weight': 0.0, 'iter_decode_eos_penalty': 0.0, 'iter_decode_max_iter': 10, 'iter_decode_force_max_iter': False, 'iter_decode_with_beam': 1, 'iter_decode_with_external_reranker': False, 'retain_iter_history': False, 'retain_dropout': False, 'retain_dropout_modules': None, 'decoding_format': None, 'no_seed_provided': False, 'eos_token': None}, 'eval_wer_tokenizer': None, 'eval_wer_post_process': 'letter', 'eval_bleu': False, 'eval_bleu_detok': None, 'eval_bleu_detok_args': '{}', 'eval_tokenized_bleu': False, 'eval_bleu_remove_bpe': None, 'eval_bleu_args': '{}', 'eval_bleu_print_samples': False, 'autoregressive': False, 'target_dictionary': None}, 'criterion': {'_name': 'ctc', 'zero_infinity': True, 'sentence_avg': True, 'post_process': 'letter', 'wer_kenlm_model': None, 'wer_lexicon': None, 'wer_lm_weight': 2.0, 'wer_word_score': -1.0, 'wer_sil_weight': 0.0, 'wer_args': None}, 'optimizer': {'_name': 'adam', 'adam_betas': '(0.9,0.98)', 'adam_eps': 1e-08, 'weight_decay': 0.0, 'use_old_adam': False, 'fp16_adam_stats': False, 'tpu': False, 'lr': [5e-05]}, 'lr_scheduler': {'_name': 'tri_stage', 'warmup_steps': 0, 'hold_steps': 0, 'decay_steps': 0, 'phase_ratio': [0.1, 0.4, 0.5], 'init_lr_scale': 0.01, 'final_lr_scale': 0.05, 'max_update': 20000.0, 'lr': [5e-05]}, 'scoring': None, 'bpe': None, 'tokenizer': None, 'ema': {'_name': None, 'store_ema': False, 'ema_decay': 0.9999, 'ema_start_update': 0, 'ema_seed_model': None, 'ema_update_freq': 1, 'ema_fp32': False}, 'job_logging_cfg': {'version': 1, 'formatters': {'simple': {'format': '[%(asctime)s][%(name)s][%(levelname)s] - %(message)s'}}, 'handlers': {'console': {'class': 'logging.StreamHandler', 'formatter': 'simple', 'stream': 'ext://sys.stdout'}, 'file': {'class': 'logging.FileHandler', 'formatter': 'simple', 'filename': 'hydra_train.log'}}, 'root': {'level': 'INFO', 'handlers': ['console', 'file']}, 'disable_existing_loggers': False}}
[2023-05-24 12:08:16,143][fairseq.tasks.audio_finetuning][INFO] - Using dict_path : /content/wav_manifest/dict.ltr.txt
[2023-05-24 12:08:42,185][fairseq.models.wav2vec.wav2vec2_asr][INFO] - {'_name': None, 'common': {'_name': None, 'no_progress_bar': False, 'log_interval': 200, 'log_format': 'json', 'log_file': '/Syed/Projects/LingoTraining/w2v2_pre_train_model/train.txt', 'aim_repo': None, 'aim_run_hash': None, 'tensorboard_logdir': None, 'wandb_project': None, 'azureml_logging': False, 'seed': 1, 'cpu': False, 'tpu': False, 'bf16': False, 'memory_efficient_bf16': False, 'fp16': True, 'memory_efficient_fp16': False, 'fp16_no_flatten_grads': False, 'fp16_init_scale': 128, 'fp16_scale_window': None, 'fp16_scale_tolerance': 0.0, 'on_cpu_convert_precision': False, 'min_loss_scale': 0.0001, 'threshold_loss_scale': None, 'amp': False, 'amp_batch_retries': 2, 'amp_init_scale': 128, 'amp_scale_window': None, 'user_dir': None, 'empty_cache_freq': 0, 'all_gather_list_size': 16384, 'model_parallel_size': 1, 'quantization_config_path': None, 'profile': False, 'reset_logging': False, 'suppress_crashes': False, 'use_plasma_view': False, 'plasma_path': '/tmp/plasma'}, 'common_eval': {'_name': None, 'path': None, 'post_process': None, 'quiet': False, 'model_overrides': '{}', 'results_path': None}, 'distributed_training': {'_name': None, 'distributed_world_size': 1, 'distributed_num_procs': 1, 'distributed_rank': 0, 'distributed_backend': 'nccl', 'distributed_init_method': None, 'distributed_port': -1, 'device_id': 0, 'distributed_no_spawn': False, 'ddp_backend': 'legacy_ddp', 'ddp_comm_hook': 'none', 'bucket_cap_mb': 25, 'fix_batches_to_gpus': False, 'find_unused_parameters': False, 'gradient_as_bucket_view': False, 'fast_stat_sync': False, 'heartbeat_timeout': -1, 'broadcast_buffers': False, 'slowmo_momentum': None, 'slowmo_base_algorithm': 'localsgd', 'localsgd_frequency': 3, 'nprocs_per_node': 1, 'pipeline_model_parallel': False, 'pipeline_balance': None, 'pipeline_devices': None, 'pipeline_chunks': 0, 'pipeline_encoder_balance': None, 'pipeline_encoder_devices': None, 'pipeline_decoder_balance': None, 'pipeline_decoder_devices': None, 'pipeline_checkpoint': 'never', 'zero_sharding': 'none', 'fp16': True, 'memory_efficient_fp16': False, 'tpu': False, 'no_reshard_after_forward': False, 'fp32_reduce_scatter': False, 'cpu_offload': False, 'use_sharded_state': False, 'not_fsdp_flatten_parameters': False}, 'dataset': {'_name': None, 'num_workers': 4, 'skip_invalid_size_inputs_valid_test': True, 'max_tokens': 700000, 'batch_size': None, 'required_batch_size_multiple': 8, 'required_seq_len_multiple': 1, 'dataset_impl': None, 'data_buffer_size': 10, 'train_subset': 'train', 'valid_subset': 'valid', 'combine_valid_subsets': None, 'ignore_unused_valid_subsets': False, 'validate_interval': 1, 'validate_interval_updates': 0, 'validate_after_updates': 0, 'fixed_validation_seed': None, 'disable_validation': False, 'max_tokens_valid': 700000, 'batch_size_valid': None, 'max_valid_steps': None, 'curriculum': 0, 'gen_subset': 'test', 'num_shards': 1, 'shard_id': 0, 'grouped_shuffling': False, 'update_epoch_batch_itr': False, 'update_ordered_indices_seed': False}, 'optimization': {'_name': None, 'max_epoch': 500, 'max_update': 40000000, 'stop_time_hours': 0.0, 'clip_norm': 0.0, 'sentence_avg': False, 'update_freq': [1], 'lr': [0.0001], 'stop_min_lr': -1.0, 'use_bmuf': False, 'skip_remainder_batch': False, 'debug_param_names': False}, 'checkpoint': {'_name': None, 'save_dir': '/Syed/Projects/LingoTraining/w2v2_pre_train_model', 'restore_file': '/Syed/Projects/LingoTraining/w2v2_pre_train_model/checkpoint_54_400000.pt', 'continue_once': 
None, 'finetune_from_model': None, 'reset_dataloader': False, 'reset_lr_scheduler': False, 'reset_meters': False, 'reset_optimizer': False, 'optimizer_overrides': '{}', 'save_interval': 1, 'save_interval_updates': 25000, 'keep_interval_updates': 1, 'keep_interval_updates_pattern': -1, 'keep_last_epochs': -1, 'keep_best_checkpoints': -1, 'no_save': False, 'no_epoch_checkpoints': True, 'no_last_checkpoints': False, 'no_save_optimizer_state': False, 'best_checkpoint_metric': 'loss', 'maximize_best_checkpoint_metric': False, 'patience': -1, 'checkpoint_suffix': '', 'checkpoint_shard_count': 1, 'load_checkpoint_on_all_dp_ranks': False, 'write_checkpoints_asynchronously': False, 'model_parallel_size': 1}, 'bmuf': {'_name': None, 'block_lr': 1.0, 'block_momentum': 0.875, 'global_sync_iter': 50, 'warmup_iterations': 500, 'use_nbm': False, 'average_sync': False, 'distributed_world_size': 1}, 'generation': {'_name': None, 'beam': 5, 'beam_mt': 0, 'nbest': 1, 'max_len_a': 0.0, 'max_len_b': 200, 'max_len_a_mt': 0.0, 'max_len_b_mt': 200, 'min_len': 1, 'match_source_len': False, 'unnormalized': False, 'no_early_stop': False, 'no_beamable_mm': False, 'lenpen': 1.0, 'lenpen_mt': 1.0, 'unkpen': 0.0, 'replace_unk': None, 'sacrebleu': False, 'score_reference': False, 'prefix_size': 0, 'no_repeat_ngram_size': 0, 'sampling': False, 'sampling_topk': -1, 'sampling_topp': -1.0, 'constraints': None, 'temperature': 1.0, 'diverse_beam_groups': -1, 'diverse_beam_strength': 0.5, 'diversity_rate': -1.0, 'print_alignment': None, 'print_step': False, 'lm_path': None, 'lm_weight': 0.0, 'iter_decode_eos_penalty': 0.0, 'iter_decode_max_iter': 10, 'iter_decode_force_max_iter': False, 'iter_decode_with_beam': 1, 'iter_decode_with_external_reranker': False, 'retain_iter_history': False, 'retain_dropout': False, 'retain_dropout_modules': None, 'decoding_format': None, 'no_seed_provided': False, 'eos_token': None}, 'eval_lm': {'_name': None, 'output_word_probs': False, 'output_word_stats': False, 'context_window': 0, 'softmax_batch': 9223372036854775807}, 'interactive': {'_name': None, 'buffer_size': 0, 'input': '-'}, 'model': {'_name': 'wav2vec2', 'extractor_mode': 'default', 'encoder_layers': 12, 'encoder_embed_dim': 768, 'encoder_ffn_embed_dim': 3072, 'encoder_attention_heads': 12, 'activation_fn': 'gelu', 'layer_type': 'conformer', 'dropout': 0.0, 'attention_dropout': 0.0, 'activation_dropout': 0.1, 'encoder_layerdrop': 0.05, 'dropout_input': 0.0, 'dropout_features': 0.1, 'final_dim': 256, 'layer_norm_first': False, 'conv_feature_layers': '[(512, 10, 5)] + [(512, 3, 2)] * 4 + [(512,2,2)] + [(512,2,2)]', 'conv_bias': False, 'logit_temp': 0.1, 'quantize_targets': True, 'quantize_input': False, 'same_quantizer': False, 'target_glu': False, 'feature_grad_mult': 0.0, 'quantizer_depth': 1, 'quantizer_factor': 3, 'latent_vars': 320, 'latent_groups': 2, 'latent_dim': 0, 'mask_length': 10, 'mask_prob': 0.65, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'require_same_masks': True, 'mask_dropout': 0.0, 'mask_channel_length': 64, 'mask_channel_prob': 0.5, 'mask_channel_before': False, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'num_negatives': 100, 'negatives_from_everywhere': False, 'cross_sample_negatives': 0, 'codebook_negatives': 0, 'conv_pos': 128, 'conv_pos_groups': 16, 'pos_conv_depth': 1, 'latent_temp': [2.0, 0.5, 0.999995], 'max_positions': 100000, 'checkpoint_activations': False, 
'required_seq_len_multiple': 2, 'crop_seq_to_multiple': 1, 'depthwise_conv_kernel_size': 31, 'attn_type': 'espnet', 'pos_enc_type': 'rel_pos', 'fp16': False}, 'task': {'_name': 'audio_pretraining', 'data': '/Syed/Projects/LingoTraining/wav_manifest', 'labels': None, 'binarized_dataset': False, 'sample_rate': 16000, 'normalize': False, 'enable_padding': False, 'max_sample_size': 250000, 'min_sample_size': 32000, 'num_batch_buckets': 0, 'tpu': False, 'text_compression_level': 'none', 'rebuild_batches': True, 'precompute_mask_config': None, 'post_save_script': None, 'subsample': 1.0, 'seed': 1}, 'criterion': None, 'optimizer': {'_name': 'adam', 'adam_betas': '(0.9,0.98)', 'adam_eps': 1e-06, 'weight_decay': 0.01, 'use_old_adam': False, 'fp16_adam_stats': False, 'tpu': False, 'lr': [0.0001]}, 'lr_scheduler': None, 'scoring': None, 'bpe': None, 'tokenizer': None, 'ema': {'_name': None, 'store_ema': False, 'ema_decay': 0.9999, 'ema_start_update': 0, 'ema_seed_model': None, 'ema_update_freq': 1, 'ema_fp32': False}, 'job_logging_cfg': {'version': 1, 'formatters': {'simple': {'format': '[%(asctime)s][%(name)s][%(levelname)s] - %(message)s'}}, 'handlers': {'console': {'class': 'logging.StreamHandler', 'formatter': 'simple', 'stream': 'ext://sys.stdout'}, 'file': {'class': 'logging.FileHandler', 'formatter': 'simple', 'filename': 'hydra_train.log'}}, 'root': {'level': 'INFO', 'handlers': ['console', 'file']}, 'disable_existing_loggers': False}}
Wav2Vec2Model(
  (feature_extractor): ConvFeatureExtractionModel(
    (conv_layers): ModuleList(
      (0): Sequential(
        (0): Conv1d(1, 512, kernel_size=(10,), stride=(5,), bias=False)
        (1): Dropout(p=0.0, inplace=False)
        (2): Fp32GroupNorm(512, 512, eps=1e-05, affine=True)
        (3): GELU(approximate='none')
      )
      (1-4): 4 x Sequential(
        (0): Conv1d(512, 512, kernel_size=(3,), stride=(2,), bias=False)
        (1): Dropout(p=0.0, inplace=False)
        (2): GELU(approximate='none')
      )
      (5-6): 2 x Sequential(
        (0): Conv1d(512, 512, kernel_size=(2,), stride=(2,), bias=False)
        (1): Dropout(p=0.0, inplace=False)
        (2): GELU(approximate='none')
      )
    )
  )
  (post_extract_proj): Linear(in_features=512, out_features=768, bias=True)
  (dropout_input): Dropout(p=0.0, inplace=False)
  (dropout_features): Dropout(p=0.1, inplace=False)
  (quantizer): None
  (project_q): None
  (encoder): TransformerEncoder(
    (pos_conv): Sequential(
      (0): Conv1d(768, 768, kernel_size=(128,), stride=(1,), padding=(64,), groups=16)
      (1): SamePad()
      (2): GELU(approximate='none')
    )
    (layers): ModuleList(
      (0-11): 12 x TransformerSentenceEncoderLayer(
        (self_attn): MultiheadAttention(
          (dropout_module): FairseqDropout()
          (k_proj): Linear(in_features=768, out_features=768, bias=True)
          (v_proj): Linear(in_features=768, out_features=768, bias=True)
          (q_proj): Linear(in_features=768, out_features=768, bias=True)
          (out_proj): Linear(in_features=768, out_features=768, bias=True)
        )
        (dropout1): Dropout(p=0.0, inplace=False)
        (dropout2): Dropout(p=0.1, inplace=False)
        (dropout3): Dropout(p=0.0, inplace=False)
        (self_attn_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=768, out_features=3072, bias=True)
        (fc2): Linear(in_features=3072, out_features=768, bias=True)
        (final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
      )
    )
    (layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
  )
  (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
  (final_proj): None
)
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/hydra/_internal/utils.py", line 198, in run_and_report
    return func()
  File "/usr/local/lib/python3.10/dist-packages/hydra/_internal/utils.py", line 347, in <lambda>
    lambda: hydra.run(
  File "/usr/local/lib/python3.10/dist-packages/hydra/_internal/hydra.py", line 107, in run
    return run_job(
  File "/usr/local/lib/python3.10/dist-packages/hydra/core/utils.py", line 129, in run_job
    ret.return_value = task_function(task_cfg)
  File "/content/fairseq/fairseq_cli/hydra_train.py", line 27, in hydra_main
    _hydra_main(cfg)
  File "/content/fairseq/fairseq_cli/hydra_train.py", line 56, in _hydra_main
    distributed_utils.call_main(cfg, pre_main, **kwargs)
  File "/content/fairseq/fairseq/distributed/utils.py", line 404, in call_main
    main(cfg, **kwargs)
  File "/content/fairseq/fairseq_cli/train.py", line 96, in main
    model = task.build_model(cfg.model)
  File "/content/fairseq/fairseq/tasks/audio_finetuning.py", line 254, in build_model
    model = super().build_model(model_cfg, from_checkpoint)
  File "/content/fairseq/fairseq/tasks/audio_pretraining.py", line 224, in build_model
    model = super().build_model(model_cfg, from_checkpoint)
  File "/content/fairseq/fairseq/tasks/fairseq_task.py", line 355, in build_model
    model = models.build_model(cfg, self, from_checkpoint)
  File "/content/fairseq/fairseq/models/__init__.py", line 106, in build_model
    return model.build_model(cfg, task)
  File "/content/fairseq/fairseq/models/wav2vec/wav2vec2_asr.py", line 224, in build_model
    w2v_encoder = Wav2VecEncoder(cfg, len(task.target_dictionary))
  File "/content/fairseq/fairseq/models/wav2vec/wav2vec2_asr.py", line 478, in __init__
    self.load_model_weights(state, model, cfg)
  File "/content/fairseq/fairseq/models/wav2vec/wav2vec2_asr.py", line 581, in load_model_weights
    model.load_state_dict(state["model"], strict=True)
  File "/content/fairseq/fairseq/models/fairseq_model.py", line 128, in load_state_dict
    return super().load_state_dict(new_state_dict, strict)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Wav2Vec2Model:
    Missing key(s) in state_dict: "encoder.layers.0.self_attn.k_proj.weight", "encoder.layers.0.self_attn.k_proj.bias", "encoder.layers.0.self_attn.v_proj.weight", "encoder.layers.0.self_attn.v_proj.bias", "encoder.layers.0.self_attn.q_proj.weight", "encoder.layers.0.self_attn.q_proj.bias", "encoder.layers.0.self_attn.out_proj.weight", "encoder.layers.0.self_attn.out_proj.bias", "encoder.layers.0.fc1.weight", "encoder.layers.0.fc1.bias", "encoder.layers.0.fc2.weight", "encoder.layers.0.fc2.bias", "encoder.layers.1.self_attn.k_proj.weight", "encoder.layers.1.self_attn.k_proj.bias", "encoder.layers.1.self_attn.v_proj.weight", "encoder.layers.1.self_attn.v_proj.bias", "encoder.layers.1.self_attn.q_proj.weight", "encoder.layers.1.self_attn.q_proj.bias", "encoder.layers.1.self_attn.out_proj.weight", "encoder.layers.1.self_attn.out_proj.bias", "encoder.layers.1.fc1.weight", "encoder.layers.1.fc1.bias", "encoder.layers.1.fc2.weight", "encoder.layers.1.fc2.bias", "encoder.layers.2.self_attn.k_proj.weight", "encoder.layers.2.self_attn.k_proj.bias", "encoder.layers.2.self_attn.v_proj.weight", "encoder.layers.2.self_attn.v_proj.bias", "encoder.layers.2.self_attn.q_proj.weight", "encoder.layers.2.self_attn.q_proj.bias", "encoder.layers.2.self_attn.out_proj.weight", "encoder.layers.2.self_attn.out_proj.bias", "encoder.layers.2.fc1.weight", "encoder.layers.2.fc1.bias", "encoder.layers.2.fc2.weight", "encoder.layers.2.fc2.bias", "encoder.layers.3.self_attn.k_proj.weight", "encoder.layers.3.self_attn.k_proj.bias", "encoder.layers.3.self_attn.v_proj.weight", "encoder.layers.3.self_attn.v_proj.bias", "encoder.layers.3.self_attn.q_proj.weight", "encoder.layers.3.self_attn.q_proj.bias", "encoder.layers.3.self_attn.out_proj.weight", "encoder.layers.3.self_attn.out_proj.bias", "encoder.layers.3.fc1.weight", "encoder.layers.3.fc1.bias", "encoder.layers.3.fc2.weight", "encoder.layers.3.fc2.bias", "encoder.layers.4.self_attn.k_proj.weight", "encoder.layers.4.self_attn.k_proj.bias", "encoder.layers.4.self_attn.v_proj.weight", "encoder.layers.4.self_attn.v_proj.bias", "encoder.layers.4.self_attn.q_proj.weight", "encoder.layers.4.self_attn.q_proj.bias", "encoder.layers.4.self_attn.out_proj.weight", "encoder.layers.4.self_attn.out_proj.bias", "encoder.layers.4.fc1.weight", "encoder.layers.4.fc1.bias", "encoder.layers.4.fc2.weight", "encoder.layers.4.fc2.bias", "encoder.layers.5.self_attn.k_proj.weight", "encoder.layers.5.self_attn.k_proj.bias", "encoder.layers.5.self_attn.v_proj.weight", "encoder.layers.5.self_attn.v_proj.bias", "encoder.layers.5.self_attn.q_proj.weight", "encoder.layers.5.self_attn.q_proj.bias", "encoder.layers.5.self_attn.out_proj.weight", "encoder.layers.5.self_attn.out_proj.bias", "encoder.layers.5.fc1.weight", "encoder.layers.5.fc1.bias", "encoder.layers.5.fc2.weight", "encoder.layers.5.fc2.bias", "encoder.layers.6.self_attn.k_proj.weight", "encoder.layers.6.self_attn.k_proj.bias", "encoder.layers.6.self_attn.v_proj.weight", "encoder.layers.6.self_attn.v_proj.bias", "encoder.layers.6.self_attn.q_proj.weight", "encoder.layers.6.self_attn.q_proj.bias", "encoder.layers.6.self_attn.out_proj.weight", "encoder.layers.6.self_attn.out_proj.bias", "encoder.layers.6.fc1.weight", "encoder.layers.6.fc1.bias", "encoder.layers.6.fc2.weight", "encoder.layers.6.fc2.bias", "encoder.layers.7.self_attn.k_proj.weight", "encoder.layers.7.self_attn.k_proj.bias", "encoder.layers.7.self_attn.v_proj.weight", "encoder.layers.7.self_attn.v_proj.bias", "encoder.layers.7.self_attn.q_proj.weight", 
"encoder.layers.7.self_attn.q_proj.bias", "encoder.layers.7.self_attn.out_proj.weight", "encoder.layers.7.self_attn.out_proj.bias", "encoder.layers.7.fc1.weight", "encoder.layers.7.fc1.bias", "encoder.layers.7.fc2.weight", "encoder.layers.7.fc2.bias", "encoder.layers.8.self_attn.k_proj.weight", "encoder.layers.8.self_attn.k_proj.bias", "encoder.layers.8.self_attn.v_proj.weight", "encoder.layers.8.self_attn.v_proj.bias", "encoder.layers.8.self_attn.q_proj.weight", "encoder.layers.8.self_attn.q_proj.bias", "encoder.layers.8.self_attn.out_proj.weight", "encoder.layers.8.self_attn.out_proj.bias", "encoder.layers.8.fc1.weight", "encoder.layers.8.fc1.bias", "encoder.layers.8.fc2.weight", "encoder.layers.8.fc2.bias", "encoder.layers.9.self_attn.k_proj.weight", "encoder.layers.9.self_attn.k_proj.bias", "encoder.layers.9.self_attn.v_proj.weight", "encoder.layers.9.self_attn.v_proj.bias", "encoder.layers.9.self_attn.q_proj.weight", "encoder.layers.9.self_attn.q_proj.bias", "encoder.layers.9.self_attn.out_proj.weight", "encoder.layers.9.self_attn.out_proj.bias", "encoder.layers.9.fc1.weight", "encoder.layers.9.fc1.bias", "encoder.layers.9.fc2.weight", "encoder.layers.9.fc2.bias", "encoder.layers.10.self_attn.k_proj.weight", "encoder.layers.10.self_attn.k_proj.bias", "encoder.layers.10.self_attn.v_proj.weight", "encoder.layers.10.self_attn.v_proj.bias", "encoder.layers.10.self_attn.q_proj.weight", "encoder.layers.10.self_attn.q_proj.bias", "encoder.layers.10.self_attn.out_proj.weight", "encoder.layers.10.self_attn.out_proj.bias", "encoder.layers.10.fc1.weight", "encoder.layers.10.fc1.bias", "encoder.layers.10.fc2.weight", "encoder.layers.10.fc2.bias", "encoder.layers.11.self_attn.k_proj.weight", "encoder.layers.11.self_attn.k_proj.bias", "encoder.layers.11.self_attn.v_proj.weight", "encoder.layers.11.self_attn.v_proj.bias", "encoder.layers.11.self_attn.q_proj.weight", "encoder.layers.11.self_attn.q_proj.bias", "encoder.layers.11.self_attn.out_proj.weight", "encoder.layers.11.self_attn.out_proj.bias", "encoder.layers.11.fc1.weight", "encoder.layers.11.fc1.bias", "encoder.layers.11.fc2.weight", "encoder.layers.11.fc2.bias". 
    Unexpected key(s) in state_dict: "encoder.layers.0.ffn1.layer_norm.weight", "encoder.layers.0.ffn1.layer_norm.bias", "encoder.layers.0.ffn1.w_1.weight", "encoder.layers.0.ffn1.w_1.bias", "encoder.layers.0.ffn1.w_2.weight", "encoder.layers.0.ffn1.w_2.bias", "encoder.layers.0.conv_module.layer_norm.weight", "encoder.layers.0.conv_module.layer_norm.bias", "encoder.layers.0.conv_module.pointwise_conv1.weight", "encoder.layers.0.conv_module.depthwise_conv.weight", "encoder.layers.0.conv_module.batch_norm.weight", "encoder.layers.0.conv_module.batch_norm.bias", "encoder.layers.0.conv_module.batch_norm.running_mean", "encoder.layers.0.conv_module.batch_norm.running_var", "encoder.layers.0.conv_module.batch_norm.num_batches_tracked", "encoder.layers.0.conv_module.pointwise_conv2.weight", "encoder.layers.0.ffn2.layer_norm.weight", "encoder.layers.0.ffn2.layer_norm.bias", "encoder.layers.0.ffn2.w_1.weight", "encoder.layers.0.ffn2.w_1.bias", "encoder.layers.0.ffn2.w_2.weight", "encoder.layers.0.ffn2.w_2.bias", "encoder.layers.0.self_attn.pos_bias_u", "encoder.layers.0.self_attn.pos_bias_v", "encoder.layers.0.self_attn.linear_q.weight", "encoder.layers.0.self_attn.linear_q.bias", "encoder.layers.0.self_attn.linear_k.weight", "encoder.layers.0.self_attn.linear_k.bias", "encoder.layers.0.self_attn.linear_v.weight", "encoder.layers.0.self_attn.linear_v.bias", "encoder.layers.0.self_attn.linear_out.weight", "encoder.layers.0.self_attn.linear_out.bias", "encoder.layers.0.self_attn.linear_pos.weight", "encoder.layers.1.ffn1.layer_norm.weight", "encoder.layers.1.ffn1.layer_norm.bias", "encoder.layers.1.ffn1.w_1.weight", "encoder.layers.1.ffn1.w_1.bias", "encoder.layers.1.ffn1.w_2.weight", "encoder.layers.1.ffn1.w_2.bias", "encoder.layers.1.conv_module.layer_norm.weight", "encoder.layers.1.conv_module.layer_norm.bias", "encoder.layers.1.conv_module.pointwise_conv1.weight", "encoder.layers.1.conv_module.depthwise_conv.weight", "encoder.layers.1.conv_module.batch_norm.weight", "encoder.layers.1.conv_module.batch_norm.bias", "encoder.layers.1.conv_module.batch_norm.running_mean", "encoder.layers.1.conv_module.batch_norm.running_var", "encoder.layers.1.conv_module.batch_norm.num_batches_tracked", "encoder.layers.1.conv_module.pointwise_conv2.weight", "encoder.layers.1.ffn2.layer_norm.weight", "encoder.layers.1.ffn2.layer_norm.bias", "encoder.layers.1.ffn2.w_1.weight", "encoder.layers.1.ffn2.w_1.bias", "encoder.layers.1.ffn2.w_2.weight", "encoder.layers.1.ffn2.w_2.bias", "encoder.layers.1.self_attn.pos_bias_u", "encoder.layers.1.self_attn.pos_bias_v", "encoder.layers.1.self_attn.linear_q.weight", "encoder.layers.1.self_attn.linear_q.bias", "encoder.layers.1.self_attn.linear_k.weight", "encoder.layers.1.self_attn.linear_k.bias", "encoder.layers.1.self_attn.linear_v.weight", "encoder.layers.1.self_attn.linear_v.bias", "encoder.layers.1.self_attn.linear_out.weight", "encoder.layers.1.self_attn.linear_out.bias", "encoder.layers.1.self_attn.linear_pos.weight", "encoder.layers.2.ffn1.layer_norm.weight", "encoder.layers.2.ffn1.layer_norm.bias", "encoder.layers.2.ffn1.w_1.weight", "encoder.layers.2.ffn1.w_1.bias", "encoder.layers.2.ffn1.w_2.weight", "encoder.layers.2.ffn1.w_2.bias", "encoder.layers.2.conv_module.layer_norm.weight", "encoder.layers.2.conv_module.layer_norm.bias", "encoder.layers.2.conv_module.pointwise_conv1.weight", "encoder.layers.2.conv_module.depthwise_conv.weight", "encoder.layers.2.conv_module.batch_norm.weight", "encoder.layers.2.conv_module.batch_norm.bias", 
"encoder.layers.2.conv_module.batch_norm.running_mean", "encoder.layers.2.conv_module.batch_norm.running_var", "encoder.layers.2.conv_module.batch_norm.num_batches_tracked", "encoder.layers.2.conv_module.pointwise_conv2.weight", "encoder.layers.2.ffn2.layer_norm.weight", "encoder.layers.2.ffn2.layer_norm.bias", "encoder.layers.2.ffn2.w_1.weight", "encoder.layers.2.ffn2.w_1.bias", "encoder.layers.2.ffn2.w_2.weight", "encoder.layers.2.ffn2.w_2.bias", "encoder.layers.2.self_attn.pos_bias_u", "encoder.layers.2.self_attn.pos_bias_v", "encoder.layers.2.self_attn.linear_q.weight", "encoder.layers.2.self_attn.linear_q.bias", "encoder.layers.2.self_attn.linear_k.weight", "encoder.layers.2.self_attn.linear_k.bias", "encoder.layers.2.self_attn.linear_v.weight", "encoder.layers.2.self_attn.linear_v.bias", "encoder.layers.2.self_attn.linear_out.weight", "encoder.layers.2.self_attn.linear_out.bias", "encoder.layers.2.self_attn.linear_pos.weight", "encoder.layers.3.ffn1.layer_norm.weight", "encoder.layers.3.ffn1.layer_norm.bias", "encoder.layers.3.ffn1.w_1.weight", "encoder.layers.3.ffn1.w_1.bias", "encoder.layers.3.ffn1.w_2.weight", "encoder.layers.3.ffn1.w_2.bias", "encoder.layers.3.conv_module.layer_norm.weight", "encoder.layers.3.conv_module.layer_norm.bias", "encoder.layers.3.conv_module.pointwise_conv1.weight", "encoder.layers.3.conv_module.depthwise_conv.weight", "encoder.layers.3.conv_module.batch_norm.weight", "encoder.layers.3.conv_module.batch_norm.bias", "encoder.layers.3.conv_module.batch_norm.running_mean", "encoder.layers.3.conv_module.batch_norm.running_var", "encoder.layers.3.conv_module.batch_norm.num_batches_tracked", "encoder.layers.3.conv_module.pointwise_conv2.weight", "encoder.layers.3.ffn2.layer_norm.weight", "encoder.layers.3.ffn2.layer_norm.bias", "encoder.layers.3.ffn2.w_1.weight", "encoder.layers.3.ffn2.w_1.bias", "encoder.layers.3.ffn2.w_2.weight", "encoder.layers.3.ffn2.w_2.bias", "encoder.layers.3.self_attn.pos_bias_u", "encoder.layers.3.self_attn.pos_bias_v", "encoder.layers.3.self_attn.linear_q.weight", "encoder.layers.3.self_attn.linear_q.bias", "encoder.layers.3.self_attn.linear_k.weight", "encoder.layers.3.self_attn.linear_k.bias", "encoder.layers.3.self_attn.linear_v.weight", "encoder.layers.3.self_attn.linear_v.bias", "encoder.layers.3.self_attn.linear_out.weight", "encoder.layers.3.self_attn.linear_out.bias", "encoder.layers.3.self_attn.linear_pos.weight", "encoder.layers.4.ffn1.layer_norm.weight", "encoder.layers.4.ffn1.layer_norm.bias", "encoder.layers.4.ffn1.w_1.weight", "encoder.layers.4.ffn1.w_1.bias", "encoder.layers.4.ffn1.w_2.weight", "encoder.layers.4.ffn1.w_2.bias", "encoder.layers.4.conv_module.layer_norm.weight", "encoder.layers.4.conv_module.layer_norm.bias", "encoder.layers.4.conv_module.pointwise_conv1.weight", "encoder.layers.4.conv_module.depthwise_conv.weight", "encoder.layers.4.conv_module.batch_norm.weight", "encoder.layers.4.conv_module.batch_norm.bias", "encoder.layers.4.conv_module.batch_norm.running_mean", "encoder.layers.4.conv_module.batch_norm.running_var", "encoder.layers.4.conv_module.batch_norm.num_batches_tracked", "encoder.layers.4.conv_module.pointwise_conv2.weight", "encoder.layers.4.ffn2.layer_norm.weight", "encoder.layers.4.ffn2.layer_norm.bias", "encoder.layers.4.ffn2.w_1.weight", "encoder.layers.4.ffn2.w_1.bias", "encoder.layers.4.ffn2.w_2.weight", "encoder.layers.4.ffn2.w_2.bias", "encoder.layers.4.self_attn.pos_bias_u", "encoder.layers.4.self_attn.pos_bias_v", "encoder.layers.4.self_attn.linear_q.weight", 
"encoder.layers.4.self_attn.linear_q.bias", "encoder.layers.4.self_attn.linear_k.weight", "encoder.layers.4.self_attn.linear_k.bias", "encoder.layers.4.self_attn.linear_v.weight", "encoder.layers.4.self_attn.linear_v.bias", "encoder.layers.4.self_attn.linear_out.weight", "encoder.layers.4.self_attn.linear_out.bias", "encoder.layers.4.self_attn.linear_pos.weight", "encoder.layers.5.ffn1.layer_norm.weight", "encoder.layers.5.ffn1.layer_norm.bias", "encoder.layers.5.ffn1.w_1.weight", "encoder.layers.5.ffn1.w_1.bias", "encoder.layers.5.ffn1.w_2.weight", "encoder.layers.5.ffn1.w_2.bias", "encoder.layers.5.conv_module.layer_norm.weight", "encoder.layers.5.conv_module.layer_norm.bias", "encoder.layers.5.conv_module.pointwise_conv1.weight", "encoder.layers.5.conv_module.depthwise_conv.weight", "encoder.layers.5.conv_module.batch_norm.weight", "encoder.layers.5.conv_module.batch_norm.bias", "encoder.layers.5.conv_module.batch_norm.running_mean", "encoder.layers.5.conv_module.batch_norm.running_var", "encoder.layers.5.conv_module.batch_norm.num_batches_tracked", "encoder.layers.5.conv_module.pointwise_conv2.weight", "encoder.layers.5.ffn2.layer_norm.weight", "encoder.layers.5.ffn2.layer_norm.bias", "encoder.layers.5.ffn2.w_1.weight", "encoder.layers.5.ffn2.w_1.bias", "encoder.layers.5.ffn2.w_2.weight", "encoder.layers.5.ffn2.w_2.bias", "encoder.layers.5.self_attn.pos_bias_u", "encoder.layers.5.self_attn.pos_bias_v", "encoder.layers.5.self_attn.linear_q.weight", "encoder.layers.5.self_attn.linear_q.bias", "encoder.layers.5.self_attn.linear_k.weight", "encoder.layers.5.self_attn.linear_k.bias", "encoder.layers.5.self_attn.linear_v.weight", "encoder.layers.5.self_attn.linear_v.bias", "encoder.layers.5.self_attn.linear_out.weight", "encoder.layers.5.self_attn.linear_out.bias", "encoder.layers.5.self_attn.linear_pos.weight", "encoder.layers.6.ffn1.layer_norm.weight", "encoder.layers.6.ffn1.layer_norm.bias", "encoder.layers.6.ffn1.w_1.weight", "encoder.layers.6.ffn1.w_1.bias", "encoder.layers.6.ffn1.w_2.weight", "encoder.layers.6.ffn1.w_2.bias", "encoder.layers.6.conv_module.layer_norm.weight", "encoder.layers.6.conv_module.layer_norm.bias", "encoder.layers.6.conv_module.pointwise_conv1.weight", "encoder.layers.6.conv_module.depthwise_conv.weight", "encoder.layers.6.conv_module.batch_norm.weight", "encoder.layers.6.conv_module.batch_norm.bias", "encoder.layers.6.conv_module.batch_norm.running_mean", "encoder.layers.6.conv_module.batch_norm.running_var", "encoder.layers.6.conv_module.batch_norm.num_batches_tracked", "encoder.layers.6.conv_module.pointwise_conv2.weight", "encoder.layers.6.ffn2.layer_norm.weight", "encoder.layers.6.ffn2.layer_norm.bias", "encoder.layers.6.ffn2.w_1.weight", "encoder.layers.6.ffn2.w_1.bias", "encoder.layers.6.ffn2.w_2.weight", "encoder.layers.6.ffn2.w_2.bias", "encoder.layers.6.self_attn.pos_bias_u", "encoder.layers.6.self_attn.pos_bias_v", "encoder.layers.6.self_attn.linear_q.weight", "encoder.layers.6.self_attn.linear_q.bias", "encoder.layers.6.self_attn.linear_k.weight", "encoder.layers.6.self_attn.linear_k.bias", "encoder.layers.6.self_attn.linear_v.weight", "encoder.layers.6.self_attn.linear_v.bias", "encoder.layers.6.self_attn.linear_out.weight", "encoder.layers.6.self_attn.linear_out.bias", "encoder.layers.6.self_attn.linear_pos.weight", "encoder.layers.7.ffn1.layer_norm.weight", "encoder.layers.7.ffn1.layer_norm.bias", "encoder.layers.7.ffn1.w_1.weight", "encoder.layers.7.ffn1.w_1.bias", "encoder.layers.7.ffn1.w_2.weight", "encoder.layers.7.ffn1.w_2.bias", 
"encoder.layers.7.conv_module.layer_norm.weight", "encoder.layers.7.conv_module.layer_norm.bias", "encoder.layers.7.conv_module.pointwise_conv1.weight", "encoder.layers.7.conv_module.depthwise_conv.weight", "encoder.layers.7.conv_module.batch_norm.weight", "encoder.layers.7.conv_module.batch_norm.bias", "encoder.layers.7.conv_module.batch_norm.running_mean", "encoder.layers.7.conv_module.batch_norm.running_var", "encoder.layers.7.conv_module.batch_norm.num_batches_tracked", "encoder.layers.7.conv_module.pointwise_conv2.weight", "encoder.layers.7.ffn2.layer_norm.weight", "encoder.layers.7.ffn2.layer_norm.bias", "encoder.layers.7.ffn2.w_1.weight", "encoder.layers.7.ffn2.w_1.bias", "encoder.layers.7.ffn2.w_2.weight", "encoder.layers.7.ffn2.w_2.bias", "encoder.layers.7.self_attn.pos_bias_u", "encoder.layers.7.self_attn.pos_bias_v", "encoder.layers.7.self_attn.linear_q.weight", "encoder.layers.7.self_attn.linear_q.bias", "encoder.layers.7.self_attn.linear_k.weight", "encoder.layers.7.self_attn.linear_k.bias", "encoder.layers.7.self_attn.linear_v.weight", "encoder.layers.7.self_attn.linear_v.bias", "encoder.layers.7.self_attn.linear_out.weight", "encoder.layers.7.self_attn.linear_out.bias", "encoder.layers.7.self_attn.linear_pos.weight", "encoder.layers.8.ffn1.layer_norm.weight", "encoder.layers.8.ffn1.layer_norm.bias", "encoder.layers.8.ffn1.w_1.weight", "encoder.layers.8.ffn1.w_1.bias", "encoder.layers.8.ffn1.w_2.weight", "encoder.layers.8.ffn1.w_2.bias", "encoder.layers.8.conv_module.layer_norm.weight", "encoder.layers.8.conv_module.layer_norm.bias", "encoder.layers.8.conv_module.pointwise_conv1.weight", "encoder.layers.8.conv_module.depthwise_conv.weight", "encoder.layers.8.conv_module.batch_norm.weight", "encoder.layers.8.conv_module.batch_norm.bias", "encoder.layers.8.conv_module.batch_norm.running_mean", "encoder.layers.8.conv_module.batch_norm.running_var", "encoder.layers.8.conv_module.batch_norm.num_batches_tracked", "encoder.layers.8.conv_module.pointwise_conv2.weight", "encoder.layers.8.ffn2.layer_norm.weight", "encoder.layers.8.ffn2.layer_norm.bias", "encoder.layers.8.ffn2.w_1.weight", "encoder.layers.8.ffn2.w_1.bias", "encoder.layers.8.ffn2.w_2.weight", "encoder.layers.8.ffn2.w_2.bias", "encoder.layers.8.self_attn.pos_bias_u", "encoder.layers.8.self_attn.pos_bias_v", "encoder.layers.8.self_attn.linear_q.weight", "encoder.layers.8.self_attn.linear_q.bias", "encoder.layers.8.self_attn.linear_k.weight", "encoder.layers.8.self_attn.linear_k.bias", "encoder.layers.8.self_attn.linear_v.weight", "encoder.layers.8.self_attn.linear_v.bias", "encoder.layers.8.self_attn.linear_out.weight", "encoder.layers.8.self_attn.linear_out.bias", "encoder.layers.8.self_attn.linear_pos.weight", "encoder.layers.9.ffn1.layer_norm.weight", "encoder.layers.9.ffn1.layer_norm.bias", "encoder.layers.9.ffn1.w_1.weight", "encoder.layers.9.ffn1.w_1.bias", "encoder.layers.9.ffn1.w_2.weight", "encoder.layers.9.ffn1.w_2.bias", "encoder.layers.9.conv_module.layer_norm.weight", "encoder.layers.9.conv_module.layer_norm.bias", "encoder.layers.9.conv_module.pointwise_conv1.weight", "encoder.layers.9.conv_module.depthwise_conv.weight", "encoder.layers.9.conv_module.batch_norm.weight", "encoder.layers.9.conv_module.batch_norm.bias", "encoder.layers.9.conv_module.batch_norm.running_mean", "encoder.layers.9.conv_module.batch_norm.running_var", "encoder.layers.9.conv_module.batch_norm.num_batches_tracked", "encoder.layers.9.conv_module.pointwise_conv2.weight", "encoder.layers.9.ffn2.layer_norm.weight", 
"encoder.layers.9.ffn2.layer_norm.bias", "encoder.layers.9.ffn2.w_1.weight", "encoder.layers.9.ffn2.w_1.bias", "encoder.layers.9.ffn2.w_2.weight", "encoder.layers.9.ffn2.w_2.bias", "encoder.layers.9.self_attn.pos_bias_u", "encoder.layers.9.self_attn.pos_bias_v", "encoder.layers.9.self_attn.linear_q.weight", "encoder.layers.9.self_attn.linear_q.bias", "encoder.layers.9.self_attn.linear_k.weight", "encoder.layers.9.self_attn.linear_k.bias", "encoder.layers.9.self_attn.linear_v.weight", "encoder.layers.9.self_attn.linear_v.bias", "encoder.layers.9.self_attn.linear_out.weight", "encoder.layers.9.self_attn.linear_out.bias", "encoder.layers.9.self_attn.linear_pos.weight", "encoder.layers.10.ffn1.layer_norm.weight", "encoder.layers.10.ffn1.layer_norm.bias", "encoder.layers.10.ffn1.w_1.weight", "encoder.layers.10.ffn1.w_1.bias", "encoder.layers.10.ffn1.w_2.weight", "encoder.layers.10.ffn1.w_2.bias", "encoder.layers.10.conv_module.layer_norm.weight", "encoder.layers.10.conv_module.layer_norm.bias", "encoder.layers.10.conv_module.pointwise_conv1.weight", "encoder.layers.10.conv_module.depthwise_conv.weight", "encoder.layers.10.conv_module.batch_norm.weight", "encoder.layers.10.conv_module.batch_norm.bias", "encoder.layers.10.conv_module.batch_norm.running_mean", "encoder.layers.10.conv_module.batch_norm.running_var", "encoder.layers.10.conv_module.batch_norm.num_batches_tracked", "encoder.layers.10.conv_module.pointwise_conv2.weight", "encoder.layers.10.ffn2.layer_norm.weight", "encoder.layers.10.ffn2.layer_norm.bias", "encoder.layers.10.ffn2.w_1.weight", "encoder.layers.10.ffn2.w_1.bias", "encoder.layers.10.ffn2.w_2.weight", "encoder.layers.10.ffn2.w_2.bias", "encoder.layers.10.self_attn.pos_bias_u", "encoder.layers.10.self_attn.pos_bias_v", "encoder.layers.10.self_attn.linear_q.weight", "encoder.layers.10.self_attn.linear_q.bias", "encoder.layers.10.self_attn.linear_k.weight", "encoder.layers.10.self_attn.linear_k.bias", "encoder.layers.10.self_attn.linear_v.weight", "encoder.layers.10.self_attn.linear_v.bias", "encoder.layers.10.self_attn.linear_out.weight", "encoder.layers.10.self_attn.linear_out.bias", "encoder.layers.10.self_attn.linear_pos.weight", "encoder.layers.11.ffn1.layer_norm.weight", "encoder.layers.11.ffn1.layer_norm.bias", "encoder.layers.11.ffn1.w_1.weight", "encoder.layers.11.ffn1.w_1.bias", "encoder.layers.11.ffn1.w_2.weight", "encoder.layers.11.ffn1.w_2.bias", "encoder.layers.11.conv_module.layer_norm.weight", "encoder.layers.11.conv_module.layer_norm.bias", "encoder.layers.11.conv_module.pointwise_conv1.weight", "encoder.layers.11.conv_module.depthwise_conv.weight", "encoder.layers.11.conv_module.batch_norm.weight", "encoder.layers.11.conv_module.batch_norm.bias", "encoder.layers.11.conv_module.batch_norm.running_mean", "encoder.layers.11.conv_module.batch_norm.running_var", "encoder.layers.11.conv_module.batch_norm.num_batches_tracked", "encoder.layers.11.conv_module.pointwise_conv2.weight", "encoder.layers.11.ffn2.layer_norm.weight", "encoder.layers.11.ffn2.layer_norm.bias", "encoder.layers.11.ffn2.w_1.weight", "encoder.layers.11.ffn2.w_1.bias", "encoder.layers.11.ffn2.w_2.weight", "encoder.layers.11.ffn2.w_2.bias", "encoder.layers.11.self_attn.pos_bias_u", "encoder.layers.11.self_attn.pos_bias_v", "encoder.layers.11.self_attn.linear_q.weight", "encoder.layers.11.self_attn.linear_q.bias", "encoder.layers.11.self_attn.linear_k.weight", "encoder.layers.11.self_attn.linear_k.bias", "encoder.layers.11.self_attn.linear_v.weight", "encoder.layers.11.self_attn.linear_v.bias", 
"encoder.layers.11.self_attn.linear_out.weight", "encoder.layers.11.self_attn.linear_out.bias", "encoder.layers.11.self_attn.linear_pos.weight".

The config file I am using is base_10h.yaml.

Training Command: !fairseq-hydra-train task.data=/content/wav_manifest model.w2v_path=/content/checkpoint_best.pt checkpoint.save_dir=/content/output --config-dir /content/fairseq/examples/wav2vec/config/finetuning --config-name base_10h.yaml
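Note that the config dump printed from the checkpoint (the second block above) records 'layer_type': 'conformer', while the model instantiated for fine-tuning is a plain TransformerEncoder. A minimal sketch to double-check what the checkpoint itself records (assuming a recent fairseq checkpoint, which stores the training config under "cfg"; older ones keep an argparse Namespace under "args"):

import torch

# Hypothetical check: print the model section of the config embedded in the
# pretrained checkpoint (the w2v_path from the command above), to compare
# layer_type / pos_enc_type / attn_type against what base_10h.yaml builds.
ckpt = torch.load("/content/checkpoint_best.pt", map_location="cpu")
cfg = ckpt.get("cfg") if ckpt.get("cfg") is not None else ckpt.get("args")
model_cfg = cfg["model"] if isinstance(cfg, dict) else getattr(cfg, "model", cfg)
print(model_cfg)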

Any suggestions here?

Thank you.

BakingBrains commented 1 year ago

Any suggestions here?