IBM / transition-amr-parser

SoTA Abstract Meaning Representation (AMR) parsing with word-node alignments in Pytorch. Includes checkpoints and other tools such as statistical significance Smatch.
Apache License 2.0

tests/minimal_test.sh: RuntimeError: Function 'LogBackward' returned nan values in its 0th output. #16

Closed. xsthunder closed this issue 1 year ago.

xsthunder commented 2 years ago

Env:

  1. torch.__version__: '1.2.0a0+afb7a16'
  2. Python 3.6.9 :: Anaconda, Inc.
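For reference, a quick way to print the same environment information (a generic sketch, not one of the repository's test scripts):

```python
import sys
import torch

print(sys.version)               # Python 3.6.9 :: Anaconda, Inc. in this report
print(torch.__version__)         # '1.2.0a0+afb7a16' in this report
print(torch.cuda.is_available()) # the log below uses GPU extraction
```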

Solutions I've tried:

  1. Removed DATA/wiki25 and ran tests/minimal_test.sh again.
  2. Tried running bash run/run_experiment.sh configs/amr2.0-action-pointer.sh, which yields the same error. I also can't find /path/to/linkcache.zip to unzip, and neither python preprocess/merge_files.py /path/to/LDC2017T10/data/amrs/split/ DATA/AMR2.0/corpora/ nor bash run/run_experiment.sh configs/amr2.0-action-pointer.sh generates /path/to/linkcache.zip. linkcache.zip seems to be a memory cache; am I missing something?

Full Logs:

#bash tests/minimal_test.sh
[Configuration file:]
configs/wiki25.sh
[Building oracle actions:]
[Configuration file:]
configs/wiki25.sh
Directory to aligner: DATA/wiki25/aligned/cofill/ already exists --- do nothing.
[normalize rules] months
[normalize rules] units
[normalize rules] cardinals
[normalize rules] ordinals
Reading DATA/wiki25/aligned/cofill//train.txt
25 sentences
216/293 node types/tokens
35/285 edge types/tokens
241/383 word types/tokens
Looking in indexes: https://pypi.mirrors.ustc.edu.cn/simple/
Requirement already satisfied: en_core_web_sm==2.0.0 from https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz#egg=en_core_web_sm==2.0.0 in /opt/conda/lib/python3.6/site-packages (2.0.0)

    Linking successful
    /opt/conda/lib/python3.6/site-packages/en_core_web_sm -->
    /opt/conda/lib/python3.6/site-packages/spacy/data/en

    You can now load the model via spacy.load('en')

Oracle: 25it [00:00, 111.70it/s]
Base actions:
Counter({'PRED': 59, 'RA': 20, 'LA': 19, 'ENTITY': 15, 'REDUCE': 1, 'SHIFT': 1, 'COPY_LEMMA': 1, 'COPY_SENSE01': 1, 'MERGE': 1})
Most frequent actions:
[('SHIFT', 198), ('REDUCE', 184), ('COPY_LEMMA', 66), ('MERGE', 26), ('LA(root)', 22), ('COPY_SENSE01', 20), ('LA(:ARG1)', 18), ('RA(:ARG1)', 13), ('LA(:ARG0)', 12), ('PRED(person)', 12)]
76 singleton actions
Counter({'PRED': 55, 'ENTITY': 9, 'RA': 8, 'LA': 4})
Reading DATA/wiki25/aligned/cofill//dev.txt
25 sentences
216/293 node types/tokens
35/285 edge types/tokens
241/383 word types/tokens
Oracle: 25it [00:00, 74.79it/s]
Base actions:
Counter({'PRED': 59, 'RA': 20, 'LA': 19, 'ENTITY': 15, 'REDUCE': 1, 'SHIFT': 1, 'COPY_LEMMA': 1, 'COPY_SENSE01': 1, 'MERGE': 1})
Most frequent actions:
[('SHIFT', 198), ('REDUCE', 184), ('COPY_LEMMA', 66), ('MERGE', 26), ('LA(root)', 22), ('COPY_SENSE01', 20), ('LA(:ARG1)', 18), ('RA(:ARG1)', 13), ('LA(:ARG0)', 12), ('PRED(person)', 12)]
76 singleton actions
Counter({'PRED': 55, 'ENTITY': 9, 'RA': 8, 'LA': 4})
Reading DATA/wiki25/aligned/cofill//test.txt
25 sentences
216/293 node types/tokens
35/285 edge types/tokens
241/383 word types/tokens
Oracle: 25it [00:00, 85.38it/s]
Base actions:
Counter({'PRED': 59, 'RA': 20, 'LA': 19, 'ENTITY': 15, 'REDUCE': 1, 'SHIFT': 1, 'COPY_LEMMA': 1, 'COPY_SENSE01': 1, 'MERGE': 1})
Most frequent actions:
[('SHIFT', 198), ('REDUCE', 184), ('COPY_LEMMA', 66), ('MERGE', 26), ('LA(root)', 22), ('COPY_SENSE01', 20), ('LA(:ARG1)', 18), ('RA(:ARG1)', 13), ('LA(:ARG0)', 12), ('PRED(person)', 12)]
76 singleton actions
Counter({'PRED': 55, 'ENTITY': 9, 'RA': 8, 'LA': 4})
[Preprocessing data:]
[Configuration file:]
configs/wiki25.sh
Cleaning up partially completed DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//
Namespace(alignfile=None, batch_normalize_reward=False, bert_layers=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24], bpe=None, cpu=False, criterion='cross_entropy', dataset_impl='mmap', destdir='DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//', embdir='DATA/wiki25/embeddings/RoBERTa-large-top24', fp16=False, fp16_init_scale=128, fp16_scale_tolerance=0.0, fp16_scale_window=None, gold_annotations=None, gold_episode_ratio=None, joined_dictionary=False, log_format=None, log_interval=1000, lr_scheduler='fixed', machine_rules=None, machine_type=None, memory_efficient_fp16=False, min_loss_scale=0.0001, no_progress_bar=False, nwordssrc=-1, nwordstgt=-1, only_source=False, optimizer='nag', padding_factor=8, pretrained_embed='roberta.large', seed=1, source_lang='en', srcdict=None, target_lang='actions', task='amr_action_pointer_graphmp', tbmf_wrapper=False, tensorboard_logdir='', testpref='DATA/wiki25/oracles/cofill_o8.3_act-states//test', tgtdict=None, threshold_loss_scale=None, thresholdsrc=0, thresholdtgt=0, tokenizer=None, trainpref='DATA/wiki25/oracles/cofill_o8.3_act-states//train', user_dir='../fairseq_ext', validpref='DATA/wiki25/oracles/cofill_o8.3_act-states//dev', workers=1)
| [en] Dictionary: 247 types
| [en] DATA/wiki25/oracles/cofill_o8.3_act-states//train.en: 25 sents, 408 tokens, 0.0% replaced by <unk>
| [en] Dictionary: 247 types
| [en] DATA/wiki25/oracles/cofill_o8.3_act-states//dev.en: 25 sents, 408 tokens, 0.0% replaced by <unk>
| [en] Dictionary: 247 types
| [en] DATA/wiki25/oracles/cofill_o8.3_act-states//test.en: 25 sents, 408 tokens, 0.0% replaced by <unk>
----------------------------------------------------------------------------------------------------
Generate and process action states information (number of workers: 1):
[English sentence file: DATA/wiki25/oracles/cofill_o8.3_act-states//train.en]
[AMR actions file: DATA/wiki25/oracles/cofill_o8.3_act-states//train.actions]
processing ... 
finished !
Processed data saved to path with prefix: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.actions
Total time elapsed: 0s
----------------------------------------------------------------------------------------------------
| [actions] DATA/wiki25/oracles/cofill_o8.3_act-states//train.actions_nopos: 25 sents, 796 tokens, 0.0% replaced by <unk>
----------------------------------------------------------------------------------------------------
Generate and process action states information (number of workers: 1):
[English sentence file: DATA/wiki25/oracles/cofill_o8.3_act-states//dev.en]
[AMR actions file: DATA/wiki25/oracles/cofill_o8.3_act-states//dev.actions]
processing ... 
finished !
Processed data saved to path with prefix: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.actions
Total time elapsed: 0s
----------------------------------------------------------------------------------------------------
| [actions] DATA/wiki25/oracles/cofill_o8.3_act-states//dev.actions_nopos: 25 sents, 796 tokens, 0.0% replaced by <unk>
----------------------------------------------------------------------------------------------------
Generate and process action states information (number of workers: 1):
[English sentence file: DATA/wiki25/oracles/cofill_o8.3_act-states//test.en]
[AMR actions file: DATA/wiki25/oracles/cofill_o8.3_act-states//test.actions]
processing ... 
finished !
Processed data saved to path with prefix: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//test.en-actions.actions
Total time elapsed: 0s
----------------------------------------------------------------------------------------------------
| [actions] DATA/wiki25/oracles/cofill_o8.3_act-states//test.actions_nopos: 25 sents, 796 tokens, 0.0% replaced by <unk>
Using cache found in /root/.cache/torch/hub/pytorch_fairseq_master
loading archive file http://dl.fbaipublicfiles.com/fairseq/models/roberta.large.tar.gz from cache at /root/.cache/torch/pytorch_fairseq/83e3a689e28e5e4696ecb0bbb05a77355444a5c8a3437e0f736d8a564e80035e.c687083d14776c1979f3f71654febb42f2bb3d9a94ff7ebdfe1ac6748dba89d2
| dictionary: 50264 types
Using roberta.large extraction in GPU

Using cache found in /root/.cache/torch/hub/pytorch_fairseq_master
loading archive file http://dl.fbaipublicfiles.com/fairseq/models/roberta.large.tar.gz from cache at /root/.cache/torch/pytorch_fairseq/83e3a689e28e5e4696ecb0bbb05a77355444a5c8a3437e0f736d8a564e80035e.c687083d14776c1979f3f71654febb42f2bb3d9a94ff7ebdfe1ac6748dba89d2
| dictionary: 50264 types
Using roberta.large extraction in GPU

Using cache found in /root/.cache/torch/hub/pytorch_fairseq_master
loading archive file http://dl.fbaipublicfiles.com/fairseq/models/roberta.large.tar.gz from cache at /root/.cache/torch/pytorch_fairseq/83e3a689e28e5e4696ecb0bbb05a77355444a5c8a3437e0f736d8a564e80035e.c687083d14776c1979f3f71654febb42f2bb3d9a94ff7ebdfe1ac6748dba89d2
| dictionary: 50264 types
Using roberta.large extraction in GPU

| Wrote preprocessed oracle data to DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//
| Wrote preprocessed embedding data to DATA/wiki25/embeddings/RoBERTa-large-top24
[Training:]
[Configuration file:]
configs/wiki25.sh
Namespace(activation_dropout=0.0, activation_fn='relu', adam_betas='(0.9,0.98)', adam_eps=1e-08, adaptive_input=False, adaptive_softmax_cutoff=None, adaptive_softmax_dropout=0, append_eos_to_target=0, apply_tgt_actnode_masks=0, apply_tgt_input_src=0, apply_tgt_src_align=1, apply_tgt_vocab_masks=1, arch='transformer_tgt_pointer_graphmp', attention_dropout=0.0, bert_backprop=False, best_checkpoint_metric='loss', bpe=None, bucket_cap_mb=25, clip_norm=0.0, collate_tgt_states=1, cpu=False, criterion='label_smoothed_cross_entropy_pointer', curriculum=0, data='DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//', dataset_impl=None, ddp_backend='c10d', decoder_attention_heads=4, decoder_embed_dim=256, decoder_embed_path=None, decoder_ffn_embed_dim=512, decoder_input_dim=256, decoder_layers=6, decoder_learned_pos=False, decoder_normalize_before=False, decoder_output_dim=256, device_id=0, disable_validation=False, distributed_backend='nccl', distributed_init_method=None, distributed_no_spawn=False, distributed_port=-1, distributed_rank=0, distributed_world_size=1, dropout=0.3, emb_dir='DATA/wiki25/embeddings/RoBERTa-large-top24', encode_state_machine=None, encoder_attention_heads=4, encoder_embed_dim=256, encoder_embed_path=None, encoder_ffn_embed_dim=512, encoder_layers=6, encoder_learned_pos=False, encoder_normalize_before=False, find_unused_parameters=False, fix_batches_to_gpus=False, fp16=False, fp16_init_scale=128, fp16_scale_tolerance=0.0, fp16_scale_window=None, keep_interval_updates=-1, keep_last_epochs=6, label_smoothing=0.01, lazy_load=False, left_pad_source='True', left_pad_target='False', log_format='json', log_interval=1000, loss_coef=1.0, lr=[0.0005], lr_scheduler='inverse_sqrt', max_epoch=10, max_sentences=None, max_sentences_valid=None, max_source_positions=1024, max_target_positions=1024, max_tokens=3584, max_tokens_valid=3584, max_update=0, maximize_best_checkpoint_metric=False, memory_efficient_fp16=False, min_loss_scale=0.0001, min_lr=1e-09, no_bert_precompute=False, no_epoch_checkpoints=False, no_last_checkpoints=False, no_progress_bar=False, no_save=False, no_save_optimizer_state=False, no_token_positional_embeddings=False, num_workers=1, optimizer='adam', optimizer_overrides='{}', pointer_dist_decoder_selfattn_avg=0, pointer_dist_decoder_selfattn_heads=1, pointer_dist_decoder_selfattn_infer=5, pointer_dist_decoder_selfattn_layers=[5], pretrained_embed_dim=1024, raw_text=False, required_batch_size_multiple=8, reset_dataloader=False, reset_lr_scheduler=False, reset_meters=False, reset_optimizer=False, restore_file='checkpoint_last.pt', save_dir='DATA/wiki25/models/exp_cofill_o8.3_act-states_RoBERTa-large-top24/_act-pos-grh_vmask1_shiftpos1_ptr-lay6-h1_grh-lay123-h2-allprev_1in1out_cam-layall-h2-abuf/ep10-seed42', save_interval=1, save_interval_updates=0, seed=42, sentence_avg=False, share_all_embeddings=False, share_decoder_input_output_embed=0, shift_pointer_value=1, skip_invalid_size_inputs_valid_test=False, source_lang=None, target_lang=None, task='amr_action_pointer_graphmp', tbmf_wrapper=False, tensorboard_logdir='DATA/wiki25/models/exp_cofill_o8.3_act-states_RoBERTa-large-top24/_act-pos-grh_vmask1_shiftpos1_ptr-lay6-h1_grh-lay123-h2-allprev_1in1out_cam-layall-h2-abuf/ep10-seed42', tgt_factored_emb_out=0, tgt_graph_heads=2, tgt_graph_layers=[0, 1, 2], tgt_graph_mask='allprev_1in1out', tgt_input_src_backprop=1, tgt_input_src_combine='add', tgt_input_src_emb='top', tgt_src_align_focus=['p0c1n0', 'p0c0n*'], tgt_src_align_heads=2, tgt_src_align_layers=[0, 1, 2, 3, 
4, 5], threshold_loss_scale=None, tokenizer=None, train_subset='train', update_freq=[1], upsample_primary=1, use_bmuf=False, user_dir='../fairseq_ext', valid_subset='valid', validate_interval=1, warmup_init_lr=1e-07, warmup_updates=4000, weight_decay=0.0)
| [en] dictionary: 248 types
| [actions_nopos] dictionary: 128 types
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.en
| loaded 25 examples from: DATA/wiki25/embeddings/RoBERTa-large-top24/valid.en-actions.en.bert
| loaded 25 examples from: DATA/wiki25/embeddings/RoBERTa-large-top24/valid.en-actions.en.wordpieces
| loaded 25 examples from: DATA/wiki25/embeddings/RoBERTa-large-top24/valid.en-actions.en.wp2w
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.actions.nopos_in
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.actions.nopos_out
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.actions.pos
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.actions.vocab_masks
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.actions.src_cursors
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.actions.actnode_masks
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.actions.actedge_masks
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.actions.actedge_1stnode_masks
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.actions.actedge_indexes
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.actions.actedge_cur_node_indexes
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.actions.actedge_cur_1stnode_indexes
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.actions.actedge_pre_node_indexes
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.actions.actedge_directions
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.actions.actedge_allpre_indexes
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.actions.actedge_allpre_pre_node_indexes
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//valid.en-actions.actions.actedge_allpre_directions
TransformerTgtPointerGraphMPModel(
  (encoder): TransformerEncoder(
    (subspace): Linear(in_features=1024, out_features=256, bias=False)
    (embed_tokens): Embedding(248, 256, padding_idx=1)
    (embed_positions): SinusoidalPositionalEmbedding()
    (layers): ModuleList(
      (0): TransformerEncoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (1): TransformerEncoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (2): TransformerEncoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (3): TransformerEncoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (4): TransformerEncoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (5): TransformerEncoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
    )
  )
  (decoder): TransformerDecoder(
    (embed_tokens): Embedding(128, 256, padding_idx=1)
    (embed_positions): SinusoidalPositionalEmbedding()
    (layers): ModuleList(
      (0): TransformerDecoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (encoder_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (1): TransformerDecoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (encoder_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (2): TransformerDecoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (encoder_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (3): TransformerDecoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (encoder_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (4): TransformerDecoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (encoder_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
      (5): TransformerDecoderLayer(
        (self_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (self_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (out_proj): Linear(in_features=256, out_features=256, bias=True)
        )
        (encoder_attn_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=256, out_features=512, bias=True)
        (fc2): Linear(in_features=512, out_features=256, bias=True)
        (final_layer_norm): FusedLayerNorm(torch.Size([256]), eps=1e-05, elementwise_affine=True)
      )
    )
  )
)
| model transformer_tgt_pointer_graphmp, criterion LabelSmoothedCrossEntropyPointerCriterion
| num. model params: 8298496 (num. trained: 8298496)
| training on 1 GPUs
| max tokens per GPU = 3584 and max sentences per GPU = None
| no existing checkpoint found DATA/wiki25/models/exp_cofill_o8.3_act-states_RoBERTa-large-top24/_act-pos-grh_vmask1_shiftpos1_ptr-lay6-h1_grh-lay123-h2-allprev_1in1out_cam-layall-h2-abuf/ep10-seed42/checkpoint_last.pt
| loading train data for epoch 0
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.en
| loaded 25 examples from: DATA/wiki25/embeddings/RoBERTa-large-top24/train.en-actions.en.bert
| loaded 25 examples from: DATA/wiki25/embeddings/RoBERTa-large-top24/train.en-actions.en.wordpieces
| loaded 25 examples from: DATA/wiki25/embeddings/RoBERTa-large-top24/train.en-actions.en.wp2w
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.actions.nopos_in
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.actions.nopos_out
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.actions.pos
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.actions.vocab_masks
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.actions.src_cursors
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.actions.actnode_masks
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.actions.actedge_masks
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.actions.actedge_1stnode_masks
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.actions.actedge_indexes
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.actions.actedge_cur_node_indexes
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.actions.actedge_cur_1stnode_indexes
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.actions.actedge_pre_node_indexes
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.actions.actedge_directions
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.actions.actedge_allpre_indexes
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.actions.actedge_allpre_pre_node_indexes
| loaded 25 examples from: DATA/wiki25/features/cofill_o8.3_act-states_RoBERTa-large-top24//train.en-actions.actions.actedge_allpre_directions
| NOTICE: your device may support faster training with --fp16
../aten/src/ATen/native/cuda/LegacyDefinitions.cpp:14: UserWarning: masked_fill_ received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead.
../aten/src/ATen/native/cuda/LegacyDefinitions.cpp:14: UserWarning: masked_fill_ received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead.
../aten/src/ATen/native/cuda/LegacyDefinitions.cpp:14: UserWarning: masked_fill_ received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead.
../aten/src/ATen/native/cuda/LegacyDefinitions.cpp:14: UserWarning: masked_fill_ received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead.
../aten/src/ATen/native/cuda/LegacyDefinitions.cpp:14: UserWarning: masked_fill_ received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead.
../aten/src/ATen/native/cuda/LegacyDefinitions.cpp:14: UserWarning: masked_fill_ received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead.
../torch/csrc/autograd/python_anomaly_mode.cpp:57: UserWarning: Traceback of forward call that caused the error:
  File "fairseq_ext/train.py", line 338, in <module>
    cli_main()
  File "fairseq_ext/train.py", line 334, in cli_main
    main(args)
  File "fairseq_ext/train.py", line 103, in main
    train(args, trainer, task, epoch_itr)
  File "fairseq_ext/train.py", line 149, in train
    log_output = trainer.train_step(samples)
  File "/opt/conda/lib/python3.6/site-packages/fairseq/trainer.py", line 264, in train_step
    ignore_grad
  File "/workspace/transition-amr-torch03/fairseq_ext/tasks/amr_action_pointer_graphmp.py", line 463, in train_step
    loss, sample_size, logging_output = criterion(model, sample)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/workspace/transition-amr-torch03/fairseq_ext/criterions/label_smoothed_cross_entropy_pointer.py", line 106, in forward
    loss_pos, nll_loss_pos = self.compute_pointer_loss(net_output, sample, reduce=reduce)
  File "/workspace/transition-amr-torch03/fairseq_ext/criterions/label_smoothed_cross_entropy_pointer.py", line 150, in compute_pointer_loss
    attn = torch.log(attn)

Traceback (most recent call last):
  File "fairseq_ext/train.py", line 338, in <module>
    cli_main()
  File "fairseq_ext/train.py", line 334, in cli_main
    main(args)
  File "fairseq_ext/train.py", line 103, in main
    train(args, trainer, task, epoch_itr)
  File "fairseq_ext/train.py", line 149, in train
    log_output = trainer.train_step(samples)
  File "/opt/conda/lib/python3.6/site-packages/fairseq/trainer.py", line 287, in train_step
    raise e
  File "/opt/conda/lib/python3.6/site-packages/fairseq/trainer.py", line 264, in train_step
    ignore_grad
  File "/workspace/transition-amr-torch03/fairseq_ext/tasks/amr_action_pointer_graphmp.py", line 470, in train_step
    optimizer.backward(loss)
  File "/opt/conda/lib/python3.6/site-packages/fairseq/optim/fairseq_optimizer.py", line 75, in backward
    loss.backward()
  File "/opt/conda/lib/python3.6/site-packages/torch/tensor.py", line 118, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Function 'LogBackward' returned nan values in its 0th output.
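The traceback points at attn = torch.log(attn) in compute_pointer_loss. Below is a minimal sketch of how this exact message can arise when a zero-probability attention entry is first log'ed and then masked out (illustrative values and variable names only, not the repository's code; as the rest of the thread shows, the actual trigger here turned out to be the PyTorch build):

```python
import torch

torch.autograd.set_detect_anomaly(True)  # the mode that produces the forward-call traceback above

# attention distribution with an exact zero at a padded position (illustrative values)
attn = torch.tensor([0.0, 0.3, 0.7], requires_grad=True)
logp = torch.log(attn)                   # forward: -inf at the zero entry
pad = torch.tensor([True, False, False])
loss = logp.masked_fill(pad, 0.0).sum()  # forward loss is finite, the -inf is masked away

# backward: masked_fill sends a zero gradient to the -inf position, so LogBackward
# computes grad / attn = 0 / 0 = nan there, and anomaly mode raises
# RuntimeError: Function 'LogBackward' returned nan values in its 0th output.
loss.backward()

# a common generic mitigation is to clamp before the log, e.g.
# logp = torch.log(attn.clamp(min=1e-8))
```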
ramon-astudillo commented 2 years ago

Can you show the output of bash tests/correctly_installed.sh? I think it may be a PyTorch version issue, since you get this:

../aten/src/ATen/native/cuda/LegacyDefinitions.cpp:14: UserWarning: masked_fill_ received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead.
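That warning only concerns the mask dtype passed to masked_fill_, e.g. (generic illustration, not this repository's code):

```python
import torch

scores = torch.zeros(3)
mask = torch.tensor([1, 0, 1], dtype=torch.uint8)  # uint8 mask triggers the deprecation warning
scores.masked_fill_(mask.bool(), float('-inf'))    # a bool mask avoids it
```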
ramon-astudillo commented 2 years ago

For linkcache.zip, see https://github.com/IBM/transition-amr-parser/issues/17

xsthunder commented 1 year ago

Removing the trailing version tag solved the "RuntimeError: Function 'LogBackward' returned nan values in its 0th output" problem, e.g. '1.2.0a0+afb7a16' -> '1.2.0':

sed -i "s/a0+afb7a16//"   /opt/conda/lib/python3.6/site-packages/torch/version.py
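The '+afb7a16' local-build suffix most likely breaks a naive parse of torch.__version__ somewhere in the stack; the sed command above rewrites version.py so the string parses cleanly. A sketch of that failure mode (an assumption about the mechanism, not the exact check inside fairseq or this repository):

```python
# illustrative only: naive version parsing chokes on the local-build suffix
version = '1.2.0a0+afb7a16'

try:
    major, minor, patch = (int(p) for p in version.split('.')[:3])
except ValueError as err:
    print('parse failed:', err)   # invalid literal for int(): '0a0+afb7a16'

# after the sed command strips 'a0+afb7a16', the same parse succeeds
major, minor, patch = (int(p) for p in '1.2.0'.split('.'))
print(major, minor, patch)        # 1 2 0
```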