facebookresearch / fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
MIT License

[error][data2vec]omegaconf.errors.ConfigKeyError: Key 'cache_in_scratch' not in 'AudioFinetuningConfig' #4267

Closed Dawn-970 closed 2 years ago

Dawn-970 commented 2 years ago

🐛 Bug

To Reproduce

Steps to reproduce the behavior (always include the command you ran):

  1. Run the command:

     fairseq-hydra-train \
       task.normalize=true \
       common.user_dir=/mypath1 \
       distributed_training.distributed_world_size=1 \
       task.data=/mypath2 \
       model.w2v_path=/mypath3 \
       --config-dir /mypath4 \
       --config-name base_100h
  2. See the error:

     omegaconf.errors.ConfigKeyError: Key 'cache_in_scratch' not in 'AudioFinetuningConfig'
         full_key: cache_in_scratch
         reference_type=Optional[AudioFinetuningConfig]
         object_type=AudioFinetuningConfig

Environment

Dawn-970 commented 2 years ago

I found that I used the wrong model: model.w2v_path should point to a pretrained (non-finetuned) checkpoint, but I used a finetuned one.
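(Editor's note: the error comes from omegaconf rejecting a key that is not declared in the target structured config. The sketch below mimics that check with stdlib dataclasses only; the config class and its fields are illustrative stand-ins, not fairseq's real AudioFinetuningConfig.)

```python
from dataclasses import dataclass, fields

@dataclass
class AudioFinetuningConfig:
    # minimal stand-in for fairseq's real config; field names are illustrative
    data: str = ""
    normalize: bool = False

def strict_merge(schema_cls, overrides):
    """Mimic omegaconf's structured-config behavior: reject unknown keys."""
    allowed = {f.name for f in fields(schema_cls)}
    for key in overrides:
        if key not in allowed:
            raise KeyError(f"Key '{key}' not in '{schema_cls.__name__}'")
    return schema_cls(**overrides)

# A key saved by a different fairseq version (e.g. from a finetuned
# checkpoint's embedded config) is rejected, just like in the traceback.
try:
    strict_merge(AudioFinetuningConfig, {"normalize": True, "cache_in_scratch": True})
except KeyError as e:
    print(e.args[0])  # prints: Key 'cache_in_scratch' not in 'AudioFinetuningConfig'
```

This is why pointing model.w2v_path at a checkpoint whose embedded config was written by a different task (or fairseq version) fails at load time rather than during training.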

xmz1009 commented 5 months ago

@Dawn-970 Hi, I hit the same kind of error: 'omegaconf.errors.ConfigKeyError: Key 'rebuild_batches' not in 'AudioPretrainingConfig' full_key: rebuild_batches reference_type=Optional[AudioPretrainingConfig] object_type=AudioPretrainingConfig'. Did you manage to fix it?
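(Editor's note: this variant usually indicates a version mismatch: the checkpoint's embedded config contains a key the installed fairseq schema does not declare. One workaround sometimes reported, besides upgrading fairseq, is to strip the unknown keys from the config stored inside the checkpoint before loading. The sketch below uses a hand-built dict as a stand-in for the real checkpoint config; in practice that dict would come from `torch.load(path)["cfg"]`, and the set of accepted keys depends on your installed fairseq version.)

```python
# Stand-in for the config dict embedded in a fairseq checkpoint
# (in reality obtained via torch.load(path)["cfg"]).
ckpt_cfg = {
    "task": {
        "_name": "audio_pretraining",
        "data": "/mypath2",
        "rebuild_batches": True,  # unknown to an older AudioPretrainingConfig
    }
}

# Keys the installed schema accepts (illustrative, not the real field list).
known_keys = {"_name", "data"}

# Drop anything the current schema would reject.
for key in list(ckpt_cfg["task"]):
    if key not in known_keys:
        ckpt_cfg["task"].pop(key)

print(ckpt_cfg["task"])
```

After stripping, the remaining config merges cleanly into the structured schema; the safer fix, when possible, is installing a fairseq version whose AudioPretrainingConfig actually declares the key.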