emorynlp / levi-graph-amr-parser

Apache License 2.0

Output of the model #3

Closed LucyFann closed 3 years ago

LucyFann commented 3 years ago

Hi han, first of all, thanks for your work.

  1. I wonder whether your AMR nodes are generated iteratively or all at once? If iteratively, does Q (in your paper "Levi Graph AMR Parser using Heterogeneous Attention", section 3.2) represent the latest graph decision?
  2. I'd also like to ask how to obtain the AMR node and arc predictions from the ConceptGenerator module's output 'probs' and the ArcGenerator module. Thanks a lot! Lucy
hankcs commented 3 years ago

Hi Lucy,

  1. Every node is generated iteratively, in an autoregressive fashion. Q itself does not represent the latest graph decision; it is the projection of the [CLS] token, as you can see from our code. You might have read "the latest graph decision" in Cai and Lam 2019, but that is inaccurate, since this encoder is a text encoder adopted from their code. I'd say it's a very well written paper that sells their approach successfully, but the actual code tells another story. The graph encoder does reflect the latest decision, though; otherwise the model wouldn't work at all. Our paper basically shows that this "graph sequence iterative inference" is not necessarily better than cross-attention, yet it consumes more parameters and decoding time.
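To make the role of Q concrete, here is a minimal sketch of the idea described above: Q is a linear projection of the [CLS] hidden state, used as the query in cross-attention at each autoregressive decoding step. All names, shapes, and hyperparameters below are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn as nn

hidden_size, num_heads = 768, 8

# Hypothetical modules: a projection producing Q from [CLS],
# and a cross-attention layer over the encoder states.
cls_proj = nn.Linear(hidden_size, hidden_size)
cross_attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)

encoder_states = torch.randn(2, 16, hidden_size)  # (batch, src_len, dim)
cls_hidden = encoder_states[:, 0:1, :]            # [CLS] sits at position 0

q = cls_proj(cls_hidden)                          # Q: (batch, 1, dim)
# One decoding step: Q attends over the encoder states to produce
# the representation from which the next node is predicted.
context, attn_weights = cross_attn(q, encoder_states, encoder_states)
print(context.shape, attn_weights.shape)
```

Each step would reuse this pattern with an updated [CLS]/decoder state, which is what makes the generation autoregressive.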

  2. Arc and concept confidences are summed to score hypotheses during beam search; see this code: https://github.com/emorynlp/levi-graph-amr-parser/blob/484aa90ee995907002acabc260b9bdac88e6353d/amr_parser/parser.py#L145
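As a rough illustration of that scoring scheme (not the repository's actual implementation), each beam hypothesis can be extended by a candidate concept whose score adds both its concept log-probability and the log-probability of its arc attachment. The function and variable names below are hypothetical.

```python
import math

def expand_beam(beams, concept_logprobs, arc_logprobs, beam_size):
    """beams: list of (concept_ids, score). Each candidate's score adds
    both the concept and the arc log-probability, then the top
    beam_size hypotheses are kept."""
    candidates = []
    for concept_ids, score in beams:
        for idx, c_lp in enumerate(concept_logprobs):
            new_score = score + c_lp + arc_logprobs[idx]
            candidates.append((concept_ids + [idx], new_score))
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates[:beam_size]

# Toy step: three candidate concepts with their arc confidences.
beams = [([], 0.0)]
concept_lp = [math.log(p) for p in (0.6, 0.3, 0.1)]
arc_lp = [math.log(p) for p in (0.5, 0.4, 0.1)]
beams = expand_beam(beams, concept_lp, arc_lp, beam_size=2)
print(beams[0][0])  # → [0]: the concept whose summed confidence is highest
```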