transducens / LASERtrain

indices, ignored = _filter_by_size_dynamic() AttributeError: 'function' object has no attribute 'size' #2

Closed · ever4244 closed 4 years ago

ever4244 commented 4 years ago

I want to run a piece of code to train the LASER multilingual translation model. The code is here: https://github.com/transducens/LASERtrain

But I encountered an error when using the training script it provides:

 File "/home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/fairseq-modules/multilingual_translation_single_model.py", line 156, in get_batch_iterator
    indices, dataset_size, max_positions, raise_exception=(not ignore_invalid_inputs),
  File "/home/wei/LIWEI_workspace/fairseq_liweimod/fairseq/fairseq/data/data_utils.py", line 182, in filter_by_size
    indices, ignored = _filter_by_size_dynamic(indices, dataset.size, max_positions)
AttributeError: 'function' object has no attribute 'size'

This is the training script I run:

fairseq-train $data_root \
    --max-epoch 4 --ddp-backend=no_c10d \
    --task multilingual_translation_singlemodel \
    --lang-pairs en-es,es-en,fr-en \
    --arch multilingual_lstm_laser_mseLearn \
    --optimizer adam --adam-betas '(0.9, 0.98)' \
    --lr 0.001 --min-lr '1e-09' --label-smoothing 0.1 \
    --criterion label_smoothed_cross_entropy_with_langid \
    --dropout 0.1 --user-dir $PWD/../LASERtrain/fairseq-modules/ --max-tokens 20  \
    --update-freq 1 --memory-efficient-fp16 \
    --save-dir checkpoints/UNv1.0_smallsample \
    --lr-scheduler inverse_sqrt --min-loss-scale '1e-09' --warmup-updates 4000 --warmup-init-lr 0.001

What have you tried?

I have tried other training data and met the same problem. I also tried reducing the network size to 1/10. In addition, I trained a translation model with the default fairseq multilingual translation task, using the script below:

CUDA_VISIBLE_DEVICES=0 fairseq-train $data_root \
    --max-epoch 50 \
    --ddp-backend=no_c10d \
    --task multilingual_translation --lang-pairs en-es,es-en,fr-en \
    --arch multilingual_transformer_iwslt_de_en \
    --share-decoders --share-decoder-input-output-embed \
    --optimizer adam --adam-betas '(0.9, 0.98)' \
    --lr 0.0005 --lr-scheduler inverse_sqrt --min-lr '1e-09' \
    --warmup-updates 4000 --warmup-init-lr '1e-07' \
    --label-smoothing 0.1 --criterion label_smoothed_cross_entropy \
    --dropout 0.3 --weight-decay 0.0001 \
    --save-dir checkpoints/multilingual_transformer_UNtest \
    --max-tokens 4000 \
    --update-freq 8

And it trained without any problem.

The error does not seem to be in the data. Can you tell me what the possible cause is? Where should I start looking? I am not very familiar with fairseq yet, so could you kindly point me in a direction, or toward a likely suspect?

A full log of the error is here:

Namespace(adam_betas='(0.9, 0.98)', adam_eps=1e-08, adaptive_softmax_cutoff='10000,50000,200000', arch='multilingual_lstm_laser_mseLearn', best_checkpoint_metric='loss', bpe=None, bucket_cap_mb=25, clip_norm=25, cpu=False, criterion='label_smoothed_cross_entropy_with_langid', curriculum=0, data='/home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample', dataset_impl=None, ddp_backend='no_c10d', decoder_attention='1', decoder_dropout_in=0.1, decoder_dropout_out=0.1, decoder_embed_dim=256, decoder_embed_path=None, decoder_freeze_embed=False, decoder_hidden_size=256, decoder_langtok=False, decoder_layers=1, decoder_out_embed_dim=512, device_id=0, disable_validation=False, distributed_backend='nccl', distributed_init_method=None, distributed_no_spawn=False, distributed_port=-1, distributed_rank=0, distributed_world_size=1, dropout=0.1, empty_cache_freq=0, encoder_bidirectional=True, encoder_dropout_in=0.1, encoder_dropout_out=0.1, encoder_embed_dim=256, encoder_embed_path=None, encoder_freeze_embed=False, encoder_hidden_size=256, encoder_langtok=None, encoder_layers=1, fast_stat_sync=False, find_unused_parameters=False, fix_batches_to_gpus=False, fixed_validation_seed=None, fp16=True, fp16_init_scale=128, fp16_scale_tolerance=0.0, fp16_scale_window=None, keep_interval_updates=-1, keep_last_epochs=-1, label_smoothing=0.1, lang_embedding_size=32, lang_pairs='en-es,es-en,fr-en', lazy_load=False, left_pad_source='True', left_pad_target='False', log_format=None, log_interval=1000, lr=[0.001], lr_scheduler='inverse_sqrt', max_epoch=4, max_sentences=None, max_sentences_valid=None, max_source_positions=1024, max_target_positions=1024, max_tokens=20, max_tokens_valid=20, max_update=0, maximize_best_checkpoint_metric=False, memory_efficient_fp16=True, min_loss_scale=1e-09, min_lr=1e-09, no_epoch_checkpoints=False, no_last_checkpoints=False, no_progress_bar=False, no_save=False, no_save_optimizer_state=False, num_workers=1, optimizer='adam', optimizer_overrides='{}', raw_text=False, required_batch_size_multiple=8, reset_dataloader=False, reset_lr_scheduler=False, reset_meters=False, reset_optimizer=False, restore_file='checkpoint_last.pt', save_dir='checkpoints/UNv1.0_smallsample', save_interval=1, save_interval_updates=0, seed=1, sentence_avg=False, share_all_embeddings=False, share_decoder_embeddings=True, share_decoder_input_output_embed=False, share_decoders=True, share_encoder_embeddings=True, share_encoders=True, skip_invalid_size_inputs_valid_test=False, source_lang=None, target_lang=None, task='multilingual_translation_singlemodel', tensorboard_logdir='', threshold_loss_scale=None, tokenizer=None, train_subset='train', update_freq=[1], upsample_primary=1, use_bmuf=False, user_dir='/home/wei/LIWEI_workspace/fairseq_liweimod/fairseq/../LASERtrain/fairseq-modules/', valid_subset='valid', validate_interval=1, warmup_init_lr=0.001, warmup_updates=4000, weight_decay=0.0)
| [en] dictionary: 392 types
| [es] dictionary: 392 types
| [fr] dictionary: 392 types
| loaded 993 examples from: /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample/valid.en-es.en
| loaded 993 examples from: /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample/valid.en-es.es
| /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample valid en-es 993 examples
| loaded 993 examples from: /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample/valid.en-es.es
| loaded 993 examples from: /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample/valid.en-es.en
| /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample valid es-en 993 examples
| loaded 983 examples from: /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample/valid.en-fr.fr
| loaded 983 examples from: /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample/valid.en-fr.en
| /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample valid fr-en 983 examples
MultilingualLSTMModelLaser(
  (encoder): LaserEncoder(
    (embed_tokens): Embedding(392, 256, padding_idx=1)
    (lstm): LSTM(256, 512, bidirectional=True)
  )
  (decoder): MultilangLSTMDecoder(
    (embed_langs): Embedding(3, 32)
    (embed_tokens): Embedding(392, 256, padding_idx=1)
    (encoder_hidden_proj): Linear(in_features=512, out_features=256, bias=True)
    (encoder_cell_proj): Linear(in_features=512, out_features=256, bias=True)
    (layers): ModuleList(
      (0): LSTMCell(1056, 256)
    )
    (fc_out): Linear(in_features=256, out_features=392, bias=True)
  )
)
| model multilingual_lstm_laser_mseLearn, criterion LabelSmoothedCrossEntropyCriterionWithLangID
| num. model params: 4963304 (num. trained: 4963304)
| training on 1 GPUs
| max tokens per GPU = 20 and max sentences per GPU = None
| no existing checkpoint found checkpoints/UNv1.0_smallsample/checkpoint_last.pt
| loading train data for epoch 0
| loaded 993 examples from: /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample/train.en-es.en
| loaded 993 examples from: /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample/train.en-es.es
| /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample train en-es 993 examples
| loaded 993 examples from: /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample/train.en-es.es
| loaded 993 examples from: /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample/train.en-es.en
| /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample train es-en 993 examples
| loaded 983 examples from: /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample/train.en-fr.fr
| loaded 983 examples from: /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample/train.en-fr.en
| /home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/bucc/data/UNv1.0.bpe40k-bin_sample train fr-en 983 examples
Traceback (most recent call last):
  File "/home/wei/anaconda3/bin/fairseq-train", line 11, in <module>
    load_entry_point('fairseq', 'console_scripts', 'fairseq-train')()
  File "/home/wei/LIWEI_workspace/fairseq_liweimod/fairseq/fairseq_cli/train.py", line 333, in cli_main
    main(args)
  File "/home/wei/LIWEI_workspace/fairseq_liweimod/fairseq/fairseq_cli/train.py", line 70, in main
    extra_state, epoch_itr = checkpoint_utils.load_checkpoint(args, trainer)
  File "/home/wei/LIWEI_workspace/fairseq_liweimod/fairseq/fairseq/checkpoint_utils.py", line 135, in load_checkpoint
    epoch=0, load_dataset=True, **passthrough_args
  File "/home/wei/LIWEI_workspace/fairseq_liweimod/fairseq/fairseq/trainer.py", line 278, in get_train_iterator
    epoch=epoch,
  File "/home/wei/LIWEI_workspace/fairseq_liweimod/LASERtrain/fairseq-modules/multilingual_translation_single_model.py", line 156, in get_batch_iterator
    indices, dataset_size, max_positions, raise_exception=(not ignore_invalid_inputs),
  File "/home/wei/LIWEI_workspace/fairseq_liweimod/fairseq/fairseq/data/data_utils.py", line 182, in filter_by_size
    indices, ignored = _filter_by_size_dynamic(indices, dataset.size, max_positions)
AttributeError: 'function' object has no attribute 'size'

What's your environment?

My environment:

  • fairseq Version: master
  • PyTorch Version: 10.1
  • OS: Linux
  • How you installed fairseq: conda
  • Python version: 3.7
  • CUDA/cuDNN version: 10.1
  • GPU models and configuration: 1080ti

mespla commented 4 years ago

Hi, I am not sure what the problem is, although it seems likely to be related to the LASER version. It is worth mentioning that this code is under development and the current version cannot be considered stable. In my experiments I have been using: Torch version 1.1.0, LASER commit 7b4226284a1ccacb7f05377a9a6dacba9ec1fd61, Python version 3.6.8. Maybe you can try with these versions (I would try LASER first of all).

ever4244 commented 4 years ago

Hi, thanks. I have moved to another platform, as I found the problem could be caused by a fairseq update (I used a newer version of fairseq and PyTorch 1.5). A similar problem also occurred for me in other, similar code, and it is caused by a change in the newer fairseq's data loading function.
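
For anyone hitting the same error, here is a minimal sketch of the suspected mismatch (illustrative only: these helpers imitate the two fairseq behaviors rather than reproducing fairseq's actual source, and example sizes are assumed to be plain ints):

    # Illustrative sketch of the suspected fairseq API change (not fairseq's
    # actual source; names follow the traceback above).

    def filter_by_size_old(indices, size_fn, max_positions):
        """Older fairseq style: the second argument is a size *function*."""
        kept = [i for i in indices if size_fn(i) <= max_positions]
        ignored = [i for i in indices if size_fn(i) > max_positions]
        return kept, ignored

    def filter_by_size_new(indices, dataset, max_positions):
        """Newer fairseq style: the second argument is the *dataset* object."""
        size_fn = dataset.size  # AttributeError if a bare function is passed
        kept = [i for i in indices if size_fn(i) <= max_positions]
        ignored = [i for i in indices if size_fn(i) > max_positions]
        return kept, ignored

    # LASERtrain's get_batch_iterator still calls the helper the old way:
    #     data_utils.filter_by_size(indices, dataset_size, max_positions, ...)
    # so under the new signature `dataset` is a function, and `dataset.size`
    # fails with: AttributeError: 'function' object has no attribute 'size'.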

Thank you for your answer; I am closing the issue.

dilanSachi commented 4 years ago

Hi, I am facing the same issue. Can you tell me which platform you moved to?

dilanSachi commented 4 years ago

I found the issue. The problem is with the fairseq and PyTorch versions: use fairseq=0.8.0 & pytorch=1.1.0.
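
As a quick sanity check that the pinned versions are the ones actually being imported (a small snippet added for convenience; both packages expose a __version__ attribute):

    # Verify that the environment matches the versions reported to work above.
    import torch
    import fairseq

    print("torch:  ", torch.__version__)    # expected: 1.1.0
    print("fairseq:", fairseq.__version__)  # expected: 0.8.0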

ever4244 commented 4 years ago

Hi, I am facing the same issue. Can you tell me which platform you moved to?

I use https://github.com/raymondhs/fairseq-laser

dilanSachi commented 4 years ago

Hi, I am facing the same issue. Can you tell me which platform you moved to?

I use https://github.com/raymondhs/fairseq-laser

Thank you.