We also encounter some warnings during amr-parse, as shown below. A detailed log from the terminal can be found in the attached file 20221207.apptransition-amr-parser.txt.
...
/app/transition-amr-parser/fairseq_ext/sequence_generator.py:1058: UserWarning: An output with one or more elements was resized since it had shape [512, 1], which does not match the required output shape [510, 1]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/native/Resize.cpp:16.)
torch.gather(
/app/transition-amr-parser/fairseq_ext/sequence_generator.py:925: UserWarning: An output with one or more elements was resized since it had shape [2], which does not match the required output shape [4]. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/native/Resize.cpp:16.)
...
WARNING: disconnected graphs
WARNING: disconnected graphs
...
Thank you for your time. Best wishes.
The warnings appear because the parser is sometimes unable to attach some subgraphs; even our best systems still produce these warnings.
You do indeed need --no-isi,
or you can remove the ISI alignment notation as done here.
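If you would rather post-process an existing output file than re-decode, something like the sketch below may work for removing the ISI alignment notation. The regular expression is only an assumption about the surface form of the alignments (markers such as ~1, ~3,4 or ~e.5 attached to concepts and constants), and the file names are placeholders, so adjust both to your data.

```python
import re

# Assumed ISI alignment markers: "~1", "~3,4" or "~e.5" attached to
# concept / constant tokens in the graph lines.  Adjust the pattern if
# your files use a different convention.
ISI_ALIGNMENT = re.compile(r"~e?\.?\d+(?:,\d+)*")

def strip_isi(in_path, out_path):
    """Copy an AMR file, dropping ISI alignment suffixes from graph lines."""
    with open(in_path) as fin, open(out_path, "w") as fout:
        for line in fin:
            # Keep metadata comments (e.g. "# ::tok ...") untouched.
            if line.startswith("#"):
                fout.write(line)
            else:
                fout.write(ISI_ALIGNMENT.sub("", line))

# Placeholder file names.
strip_isi("output_file.amr", "output_file.noalign.amr")
```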
You will also need the BLINK cache, as explained in the README.md and provided here. However, that should only have a minor effect on performance.
The examples you show seem OK: they are already tokenized and you do not use --tokenize,
and the AMR itself looks perfectly fine.
I would use a higher number of restarts for Smatch (see here), but that alone does not explain those numbers.
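As a concrete example, the restart count in the standard smatch.py script is controlled by -r (the default is 4, if I remember correctly). A minimal sketch, assuming smatch.py is available on your path and using placeholder file names:

```python
import subprocess

# Placeholder file names; point these at your prediction and the gold AMRs.
PRED = "output_file.amr"
GOLD = "gold.amr"

# -r sets the number of random restarts of the hill-climbing search;
# a higher value gives a more stable (usually slightly higher) score.
subprocess.run(["python", "smatch.py", "-f", PRED, GOLD, "-r", "10"], check=True)
```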
Such a low score should, however, be clearly visible when looking at the AMR files. Can you compare with the gold AMRs and see if you find a systematic pattern?
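One low-tech way to look for a systematic pattern is to print predicted and gold graphs side by side. The sketch below only assumes that both files separate AMRs with blank lines; the file names are placeholders.

```python
def read_amr_blocks(path):
    """Split an AMR file into per-sentence blocks separated by blank lines."""
    with open(path) as f:
        return [b.strip() for b in f.read().split("\n\n") if b.strip()]

# Placeholder file names.
pred = read_amr_blocks("output_file.amr")
gold = read_amr_blocks("gold.amr")
assert len(pred) == len(gold), "different number of AMRs in the two files"

# Eyeball the first few pairs for systematic differences (leftover ISI
# markers, disconnected subgraphs, tokenization mismatches, ...).
for i, (p, g) in enumerate(zip(pred[:3], gold[:3])):
    print(f"### sentence {i}\n--- gold ---\n{g}\n--- predicted ---\n{p}\n")
```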
Thanks a lot. I succeeded in reproducing the results.
Thanks for releasing the code and the pre-trained weights. It's very helpful. However, I'm having trouble reproducing the results with the pre-trained weights released in trained-checkpoints. I'll detail my steps and some example text below. Thank you in advance!
my steps
- aware of --no-isi
- decode with the pre-trained model using the --no-isi flag and compute Smatch (sketched below)

examples from text files
input_file: line 8-9 from
output_file: corresponding prediction from input_file
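For completeness, here is a minimal sketch of the decoding step described above. Only --no-isi comes from this thread; the amr-parse command and its -c / -i / -o options are written from memory of the repo README and should be checked against your installed version, and all paths are placeholders.

```python
import subprocess

# Placeholder paths.
CHECKPOINT = "checkpoint.pt"   # the released pre-trained checkpoint
INPUT = "input_file"           # tokenized sentences, one per line
PRED = "output_file.amr"       # where the parses will be written

# Decode with the pre-trained model, disabling ISI alignment notation.
subprocess.run(
    ["amr-parse", "-c", CHECKPOINT, "-i", INPUT, "-o", PRED, "--no-isi"],
    check=True,
)

# Then score PRED against the gold AMRs with Smatch as shown earlier.
```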