facebookresearch / fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
MIT License

omegaconf.errors.ConfigKeyError: str interpolation key 'model.mask_min_space' not found #3706

Open rajeevbaalwan opened 3 years ago

rajeevbaalwan commented 3 years ago

🐛 Bug

To Reproduce

Steps to reproduce the behavior (always include the command you ran):

1. Run cmd:

```shell
python fairseq-hydra-train \
    task.data=/mnt/rajeev/speech/finetuning/750_hour \
    model.w2v_path=/mnt/rajeev/speech/checkpoints/pretraining/hindi_pretrained_4kh.pt \
    distributed_training.distributed_world_size=4 \
    optimization.update_freq=[6] \
    common.tensorboard_logdir=/mnt/rajeev/speech/logs/finetuning/750_hour/tensorboard_2021-07-07_15-48-19 \
    checkpoint.save_dir=/mnt/rajeev/speech/checkpoints/finetuning/750_hour \
    checkpoint.restore_file=/mnt/rajeev/speech/checkpoints/finetuning/750_hour/checkpoint_last.pt \
    --config-dir /mnt/rajeev/speech/config/finetuning/ \
    --config-name base_960h_seq2seq.yaml
```

2. See error

```
Traceback (most recent call last):
  File "/mnt/rajeev/speech3.8/bin/fairseq-hydra-train", line 33, in <module>
    sys.exit(load_entry_point('fairseq', 'console_scripts', 'fairseq-hydra-train')())
  File "/mnt/rajeev/speech/fairseq/fairseq_cli/hydra_train.py", line 90, in cli_main
    hydra_main()
  File "/mnt/rajeev/speech3.8/lib/python3.8/site-packages/hydra/main.py", line 32, in decorated_main
    _run_hydra(
  File "/mnt/rajeev/speech3.8/lib/python3.8/site-packages/hydra/_internal/utils.py", line 346, in _run_hydra
    run_and_report(
  File "/mnt/rajeev/speech3.8/lib/python3.8/site-packages/hydra/_internal/utils.py", line 201, in run_and_report
    raise ex
  File "/mnt/rajeev/speech3.8/lib/python3.8/site-packages/hydra/_internal/utils.py", line 198, in run_and_report
    return func()
  File "/mnt/rajeev/speech3.8/lib/python3.8/site-packages/hydra/_internal/utils.py", line 347, in <lambda>
    lambda: hydra.run(
  File "/mnt/rajeev/speech3.8/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 107, in run
    return run_job(
  File "/mnt/rajeev/speech3.8/lib/python3.8/site-packages/hydra/core/utils.py", line 127, in run_job
    ret.return_value = task_function(task_cfg)
  File "/mnt/rajeev/speech/fairseq/fairseq_cli/hydra_train.py", line 36, in hydra_main
    cfg = OmegaConf.create(OmegaConf.to_container(cfg, resolve=True, enum_to_str=True))
  File "/mnt/rajeev/speech3.8/lib/python3.8/site-packages/omegaconf/omegaconf.py", line 442, in to_container
    return BaseContainer._to_content(cfg, resolve=resolve, enum_to_str=enum_to_str)
  File "/mnt/rajeev/speech3.8/lib/python3.8/site-packages/omegaconf/basecontainer.py", line 194, in _to_content
    retdict[key] = BaseContainer._to_content(
  File "/mnt/rajeev/speech3.8/lib/python3.8/site-packages/omegaconf/basecontainer.py", line 188, in _to_content
    node = node._dereference_node(
  File "/mnt/rajeev/speech3.8/lib/python3.8/site-packages/omegaconf/base.py", line 123, in _dereference_node
    v = parent._resolve_simple_interpolation(
  File "/mnt/rajeev/speech3.8/lib/python3.8/site-packages/omegaconf/base.py", line 327, in _resolve_simple_interpolation
    raise ConfigKeyError(
omegaconf.errors.ConfigKeyError: str interpolation key 'model.mask_min_space' not found
```

Configuration File used for finetuning:

```yaml
common:
  fp16: true
  log_format: json
  log_interval: 1
  wandb_project: 'finetuning-seq2seq'

checkpoint:
  no_epoch_checkpoints: true
  best_checkpoint_metric: wer

task:
  _name: audio_pretraining
  data: ???
  normalize: false
  labels: ltr
  autoregressive: true

dataset:
  num_workers: 6
  max_tokens: 3200000
  skip_invalid_size_inputs_valid_test: true
  validate_interval: 1
  valid_subset: valid

valid_subset: dev_other

distributed_training:
  ddp_backend: legacy_ddp
  distributed_world_size: 8

criterion:
  _name: cross_entropy

optimization:
  max_update: 320000
  lr: [0.0001]
  sentence_avg: true

optimizer:
  _name: adam
  adam_betas: (0.9,0.98)
  adam_eps: 1e-08

lr_scheduler:
  _name: tri_stage
  phase_ratio: [0.1, 0.4, 0.5]
  final_lr_scale: 0.05

model:
  _name: wav2vec_seq2seq
  w2v_path: ???
  apply_mask: true
  mask_prob: 0.5
  mask_channel_prob: 0.1
  mask_channel_length: 64
  layerdrop: 0.1
  activation_dropout: 0.1
  feature_grad_mult: 0.0
  freeze_finetune_updates: 0
```

Code sample

Expected behavior

Environment

Additional context

No

793328114 commented 1 year ago

I ran into the same problem. Have you solved it?