IBM / transition-amr-parser

SoTA Abstract Meaning Representation (AMR) parsing with word-node alignments in Pytorch. Includes checkpoints and other tools such as statistical significance Smatch.
Apache License 2.0

ZeroDivisionError: division by zero running 'amr-parse' command #30

Closed by aianta 1 year ago

aianta commented 2 years ago

Full trace:

amr-parse -c /home/aianta/transition-amr-parser/DATA/AMR2.0/models/amr2.0-structured-bart-large-neur-al/seed42/checkpoint_wiki.smatch_top5-avg.pt -i test_file.txt -o test_out.txt
| [en] dictionary: 34112 types
| [actions_nopos] dictionary: 12832 types
----------loading pretrained bart.large model ----------
Downloading: "https://github.com/pytorch/fairseq/archive/main.zip" to /home/aianta/.cache/torch/hub/main.zip
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3699866548/3699866548 [01:02<00:00, 59519919.52B/s]
---------- task bart rewind: loading pretrained bart.large model ----------
Using cache found in /home/aianta/.cache/torch/hub/pytorch_fairseq_main
using GPU for models
pretrained_embed:  bart.large
Using cache found in /home/aianta/.cache/torch/hub/pytorch_fairseq_main
Using bart.large extraction in GPU
Finished loading models
self.machine_config:  /home/aianta/transition-amr-parser/DATA/AMR2.0/models/amr2.0-structured-bart-large-neur-al/seed42/machine_config.json
Total time taken to load parser: 0:02:23.460215
Parsing 1 sentences
Running on batch size: 1
1
decoding:   0%|                                                                                                                                                                 | 0/1 [00:00<?, ?it/s]/home/aianta/anaconda3/envs/ibm-amr/lib/python3.7/site-packages/fairseq/search.py:140: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  beams_buf = indices_buf // vocab_size
decoding: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00,  2.40it/s]
Traceback (most recent call last):
  File "/home/aianta/anaconda3/envs/ibm-amr/bin/amr-parse", line 33, in <module>
    sys.exit(load_entry_point('transition-amr-parser', 'console_scripts', 'amr-parse')())
  File "/home/aianta/transition-amr-parser/transition_amr_parser/parse.py", line 756, in main
    sents_per_second = num_sent / time_secs.seconds
ZeroDivisionError: division by zero
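
A likely explanation, assuming time_secs in parse.py is a datetime.timedelta: its .seconds attribute holds only the whole-seconds component, so a parse that finishes in under a second leaves the divisor at 0. A minimal sketch of the truncation:

from datetime import timedelta

# Hypothetical sub-second parse time; .seconds truncates to whole seconds,
# which is what makes num_sent / time_secs.seconds fail above.
time_secs = timedelta(milliseconds=417)
print(time_secs.seconds)          # 0
print(time_secs.total_seconds())  # 0.417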

I went to line 756 in parse.py and changed:

sents_per_second = num_sent / time_secs.seconds

to

sents_per_second = num_sent / (time_secs.seconds if time_secs.seconds != 0 else 0.1)

And everything worked fine.
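
An alternative guard along the same lines (just a sketch, again assuming time_secs is a datetime.timedelta, and not the repository's actual fix) would use total_seconds() so sub-second runs still report a meaningful rate:

# Keep sub-second precision and avoid a zero divisor.
elapsed = max(time_secs.total_seconds(), 1e-6)
sents_per_second = num_sent / elapsed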

ramon-astudillo commented 1 year ago

This may be due to a missing blank line in the AMR penman file, which makes the parser read 0 AMRs. The error message could be clearer.
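
A sketch of the kind of up-front check that would make this failure mode clearer (read_amr_blocks is a hypothetical helper, not the parser's current code):

# Hypothetical sketch: fail early with a readable message instead of a
# ZeroDivisionError further down the pipeline. Assumes AMR entries in the
# penman file are separated by blank lines.
def read_amr_blocks(path):
    with open(path) as f:
        return [block for block in f.read().split("\n\n") if block.strip()]

blocks = read_amr_blocks("test_file.txt")  # hypothetical input path
if not blocks:
    raise ValueError(
        "No AMRs found in the input file; check that entries are separated "
        "by blank lines and that the file ends with a trailing newline."
    )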