cordercorder / nmt-multi

Codebase for multilingual neural machine translation
MIT License
13 stars 2 forks

some problems from train/xxxx.sh #4

Open altctrl00 opened 1 year ago

altctrl00 commented 1 year ago

When I run xxx.sh to train the MNMT model, it reports that --source-dict and --target-dict are unrecognized arguments.

cordercorder commented 1 year ago

This may be caused by compatibility issues between different fairseq versions.

Both --source-dict and --target-dict are command-line arguments of the translation_multi_simple_epoch task in the latest fairseq on the main branch. Please see here.

Can you try a more recent version of fairseq, or use fairseq at the same commit id as us? The fairseq commit d3890e5 was used throughout our experiments.

altctrl00 commented 1 year ago

I first uninstalled fairseq via pip and conda, then pip-installed the fairseq bundled in your project (nmt-multi/fairseq) with the command lines you provided. After that I ran xxx.sh, and it returned: ImportError: cannot import name 'prune_state_dict' from 'fairseq.checkpoint_utils' (/.../nmt-multi/fairseq/fairseq/checkpoint_utils.py)

cordercorder commented 1 year ago

This is an unexpected issue.

As the prune_state_dict function is already defined in checkpoint_utils.py (please see here), prune_state_dict should be importable from fairseq.checkpoint_utils.

Can you provide more details about how you uninstalled the previous fairseq from your environment and installed the fairseq in the nmt-multi folder?

altctrl00 commented 1 year ago

Maybe export PYTHONPATH=/.../nmt-multi/fairseq:${PYTHONPATH} would solve ImportError: cannot import name 'prune_state_dict' from 'fairseq.checkpoint_utils' (/.../nmt-multi/fairseq/fairseq/checkpoint_utils.py). I will try it. Thanks for your help!

cordercorder commented 1 year ago

Yes. The Python interpreter mistakenly treats the nmt-multi/fairseq folder as the fairseq package. This happened because I simply copied the entire fairseq folder into nmt-multi while preparing the open-source release on GitHub. Thanks for reporting this issue.

Renaming the nmt-multi/fairseq folder to nmt-multi/fairseq_dir and reinstalling fairseq can also solve the problem.
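The shadowing can be reproduced with a minimal sketch. The package name below is hypothetical; the same mechanism makes a local fairseq/ folder win over the pip-installed package whenever its parent directory sits earlier on sys.path (e.g. via the current working directory or PYTHONPATH):

```python
import os
import sys
import tempfile

# Hypothetical package name, used only for this demo.
pkg_name = "shadow_demo_pkg"

# Create a local directory that looks like a package.
tmp_dir = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp_dir, pkg_name)
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("WHERE = 'local copy'\n")

# Directories earlier in sys.path win: prepending tmp_dir means the
# local folder is imported instead of any installed distribution.
sys.path.insert(0, tmp_dir)
mod = __import__(pkg_name)
print(mod.WHERE)     # -> local copy
print(mod.__file__)  # points into tmp_dir, not site-packages
```

Checking `fairseq.__file__` after import is a quick way to see which copy the interpreter actually picked up.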

altctrl00 commented 1 year ago

During OPUS-100 training, I get the following error:

2022-10-17 21:03:08 | INFO | fairseq.data.data_utils | loaded 432,231 examples from: /data1/linhz/nmt-multi/data/opus-100-corpus/preprocessed_data/main_data_bin/train.en-xh.en
2022-10-17 21:03:08 | INFO | fairseq.data.data_utils | loaded 432,231 examples from: /data1/linhz/nmt-multi/data/opus-100-corpus/preprocessed_data/main_data_bin/train.en-xh.xh
2022-10-17 21:03:08 | INFO | fairseq.data.multilingual.multilingual_data_manager | /data1/linhz/nmt-multi/data/opus-100-corpus/preprocessed_data/main_data_bin train en-xh 432231 examples
Traceback (most recent call last):
  File "/home/linhz/anaconda3/envs/fs/bin/fairseq-train", line 8, in <module>
    sys.exit(cli_main())
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq_cli/train.py", line 450, in cli_main
    distributed_utils.call_main(cfg, main)
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq/distributed/utils.py", line 364, in call_main
    main(cfg, **kwargs)
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq_cli/train.py", line 126, in main
    disable_iterator_cache=task.has_sharded_data("train"),
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq/checkpoint_utils.py", line 221, in load_checkpoint
    epoch=1, load_dataset=True, **passthrough_args
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq/trainer.py", line 443, in get_train_iterator
    data_selector=data_selector,
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq/tasks/translation_multi_simple_epoch.py", line 168, in load_dataset
    **kwargs,
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq/data/multilingual/multilingual_data_manager.py", line 1126, in load_dataset
    split, training, epoch, combine, shard_epoch, **kwargs
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq/data/multilingual/multilingual_data_manager.py", line 1102, in load_sampled_multi_dataset
    split, training, epoch, combine, shard_epoch=shard_epoch, **kwargs
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq/data/multilingual/multilingual_data_manager.py", line 1056, in load_split_datasets
    for param in data_param_list
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq/data/multilingual/multilingual_data_manager.py", line 1056, in <listcomp>
    for param in data_param_list
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq/data/multilingual/multilingual_data_manager.py", line 776, in load_a_dataset
    src_langtok = self.get_encoder_langtok(src, tgt, src_langtok_spec)
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq/data/multilingual/multilingual_data_manager.py", line 475, in get_encoder_langtok
    langtok, self.get_source_dictionary(src_lang) if src_lang else self.get_target_dictionary(tgt_lang)
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq/data/multilingual/multilingual_data_manager.py", line 456, in get_langtok_index
    ), "cannot find language token {} in the dictionary".format(lang_tok)
AssertionError: cannot find language token __an__ in the dictionary

During TED-59 training, I get the following error:

2022-10-17 21:01:59 | INFO | fairseq.data.multilingual.multilingual_data_manager | [valid] num of shards: {'main:en-hu': 1, 'main:en-eo': 1, 'main:en-es': 1, 'main:en-ka': 1, 'main:en-nb': 1, 'main:en-az': 1, 'main:en-da': 1, 'main:en-eu': 1, 'main:en-id': 1, 'main:en-cs': 1, 'main:en-sv': 1, 'main:en-sq': 1, 'main:en-be': 1, 'main:en-bs': 1, 'main:en-bn': 1, 'main:en-hi': 1, 'main:en-fr': 1, 'main:en-sk': 1, 'main:en-vi': 1, 'main:en-hy': 1, 'main:en-ro': 1, 'main:en-fa': 1, 'main:en-ko': 1, 'main:en-it': 1, 'main:en-ur': 1, 'main:en-ja': 1, 'main:en-zh': 1, 'main:en-ar': 1, 'main:en-fi': 1, 'main:en-my': 1, 'main:en-mn': 1, 'main:en-ta': 1, 'main:en-th': 1, 'main:en-el': 1, 'main:en-et': 1, 'main:en-bg': 1, 'main:en-tr': 1, 'main:en-sl': 1, 'main:en-de': 1, 'main:en-mr': 1, 'main:en-hr': 1, 'main:en-pl': 1, 'main:en-lt': 1, 'main:en-gl': 1, 'main:en-sr': 1, 'main:en-pt': 1, 'main:en-ku': 1, 'main:en-uk': 1, 'main:en-kk': 1, 'main:en-ms': 1, 'main:en-nl': 1, 'main:en-he': 1, 'main:en-ru': 1, 'main:en-mk': 1, 'main:hu-en': 1, 'main:eo-en': 1, 'main:es-en': 1, 'main:ka-en': 1, 'main:nb-en': 1, 'main:az-en': 1, 'main:da-en': 1, 'main:eu-en': 1, 'main:id-en': 1, 'main:cs-en': 1, 'main:sv-en': 1, 'main:sq-en': 1, 'main:be-en': 1, 'main:bs-en': 1, 'main:bn-en': 1, 'main:hi-en': 1, 'main:fr-en': 1, 'main:sk-en': 1, 'main:vi-en': 1, 'main:hy-en': 1, 'main:ro-en': 1, 'main:fa-en': 1, 'main:ko-en': 1, 'main:it-en': 1, 'main:ur-en': 1, 'main:ja-en': 1, 'main:zh-en': 1, 'main:ar-en': 1, 'main:fi-en': 1, 'main:my-en': 1, 'main:mn-en': 1, 'main:ta-en': 1, 'main:th-en': 1, 'main:el-en': 1, 'main:et-en': 1, 'main:bg-en': 1, 'main:tr-en': 1, 'main:sl-en': 1, 'main:de-en': 1, 'main:mr-en': 1, 'main:hr-en': 1, 'main:pl-en': 1, 'main:lt-en': 1, 'main:gl-en': 1, 'main:sr-en': 1, 'main:pt-en': 1, 'main:ku-en': 1, 'main:uk-en': 1, 'main:kk-en': 1, 'main:ms-en': 1, 'main:nl-en': 1, 'main:he-en': 1, 'main:ru-en': 1, 'main:mk-en': 1}
Traceback (most recent call last):
  File "/home/linhz/anaconda3/envs/fs/bin/fairseq-train", line 8, in <module>
    sys.exit(cli_main())
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq_cli/train.py", line 450, in cli_main
    distributed_utils.call_main(cfg, main)
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq/distributed/utils.py", line 364, in call_main
    main(cfg, **kwargs)
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq_cli/train.py", line 74, in main
    task.load_dataset(valid_sub_split, combine=False, epoch=1)
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq/tasks/translation_multi_simple_epoch.py", line 168, in load_dataset
    **kwargs,
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq/data/multilingual/multilingual_data_manager.py", line 1126, in load_dataset
    split, training, epoch, combine, shard_epoch, **kwargs
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq/data/multilingual/multilingual_data_manager.py", line 1102, in load_sampled_multi_dataset
    split, training, epoch, combine, shard_epoch=shard_epoch, **kwargs
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq/data/multilingual/multilingual_data_manager.py", line 1042, in load_split_datasets
    split, epoch, shard_epoch=shard_epoch
  File "/data1/linhz/nmt-multi/fairseq_dir/fairseq/data/multilingual/multilingual_data_manager.py", line 955, in get_split_data_param_list
    paths, epoch, shard_epoch, split_num_shards_dict[key]
KeyError: 'main:en-zh_tw'

cordercorder commented 1 year ago

For the OPUS-100 dataset: does the lang_dict.txt file contain the an language tag? Inserting the an language tag into lang_dict.txt should solve the problem.
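As a rough sketch (the helper name and file path are placeholders, assuming lang_dict.txt holds one language code per line), the missing tag could be checked and appended like this:

```python
def ensure_lang(lang_dict_path, lang):
    """Append `lang` to the lang_dict file if it is not already listed.

    Assumes the fairseq lang_dict format: one language code per line.
    """
    with open(lang_dict_path) as f:
        langs = [line.strip() for line in f if line.strip()]
    if lang not in langs:
        with open(lang_dict_path, "a") as f:
            f.write(lang + "\n")
        return True   # tag was missing and has been appended
    return False      # tag already present

# Example (path is a placeholder): ensure_lang("lang_dict.txt", "an")
```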

For the TED-59 dataset: it seems there is no validation set for en-zh_tw. Has the validation set of en-zh_tw been processed with fairseq-preprocess?
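A quick way to spot such gaps, sketched under the assumption that binarized validation files follow the valid.&lt;src&gt;-&lt;tgt&gt;.&lt;lang&gt;.idx naming visible in the training log (the directory and pair list below are placeholders):

```python
import os

def missing_valid_pairs(data_bin_dir, pairs):
    """Return language pairs whose validation split was never binarized.

    Assumes filenames like valid.en-zh_tw.en.idx, matching the pattern
    seen in the log (e.g. train.en-xh.en).
    """
    missing = []
    for pair in pairs:                 # e.g. "en-zh_tw"
        src, _tgt = pair.split("-", 1)
        idx = os.path.join(data_bin_dir, f"valid.{pair}.{src}.idx")
        if not os.path.exists(idx):
            missing.append(pair)
    return missing

# Example: missing_valid_pairs("main_data_bin", ["en-zh_tw", "en-fr"])
```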

cordercorder commented 1 year ago

Communicating through GitHub issues may be inefficient. Can we communicate via WeChat or QQ?

altctrl00 commented 1 year ago

Of course. My QQ is 410649905.