harvardnlp / compound-pcfg


No root symbol in MAP parse trees #5

Closed adamxyang closed 4 years ago

adamxyang commented 4 years ago

The root symbol is not considered when getting the MAP parse tree both in the paper and in the code: https://github.com/harvardnlp/compound-pcfg/blob/1c0078c1be386a9c3ceab8500d7e7864aa32ca93/eval.py#L176-L189

But according to the Viterbi algorithm and the majority of gold parse trees in the treebank, there should be a root symbol (although I haven't looked at the Viterbi implementation here in PCFG.py). Why don't we have a root symbol in the MAP trees?

yoonkim commented 4 years ago

Great question! In our PCFG we have a separate rule S-->NT, where NT ranges over the set of nonterminals (section 2 of the paper), so a full tree in our grammar looks something like slide 15 of http://www.people.fas.harvard.edu/~yoonkim/data/comp-pcfg-slides.pdf. This is slightly different from the usual CNF for PCFGs.

In practice, however, we found that the distribution for S-->NT becomes a one-hot distribution after training, so we can essentially replace the one nonterminal that has all the mass with S. (This is why some of my slides don't show the explicit S-->NT transformation.)
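To make the root step concrete, here is a small sketch of how the S-->NT rule folds into MAP decoding. The function and variable names are mine, not the repo's API; it only illustrates that once S-->NT is one-hot, the root label is fixed by the root rule regardless of the span scores.

```python
# Hypothetical sketch (names are assumptions, not the repo's code):
# fold the S -> NT root rule into Viterbi decoding over the full span.
import numpy as np

def map_root(span_scores, root_log_probs):
    """span_scores[A]: log Viterbi score of the best tree rooted at
    nonterminal A covering the whole sentence; root_log_probs[A]:
    log p(S -> A). The MAP parse maximizes their sum."""
    return int(np.argmax(span_scores + root_log_probs))

# When S -> NT has collapsed to a (near) one-hot distribution, the
# root rule alone determines the choice, even against better spans:
one_hot = np.log(np.array([1e-9, 1.0 - 2e-9, 1e-9]))
scores = np.array([-5.0, -20.0, -3.0])
print(map_root(scores, one_hot))  # -> 1
```

Relabeling that winning nonterminal as S then recovers a rooted tree without changing the parse.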

Note that we do not count the root-level constituent for evaluation (per convention).
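The convention of excluding the root constituent can be sketched as an unlabeled span F1 that drops the whole-sentence span before scoring. This is a minimal illustration with my own function name, not the repo's evaluation code:

```python
# Hedged sketch: unlabeled span F1 that ignores the root-level
# (whole-sentence) constituent, per convention in unsupervised parsing.
def span_f1(pred_spans, gold_spans, sent_len):
    drop = {(0, sent_len)}          # the root-level constituent
    pred = set(pred_spans) - drop
    gold = set(gold_spans) - drop
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)           # spans shared by both trees
    if tp == 0:
        return 0.0
    p, r = tp / len(pred), tp / len(gold)
    return 2 * p * r / (p + r)

# Both trees share (0, 2); the (0, 5) root span is excluded.
print(span_f1([(0, 5), (0, 2), (2, 5)], [(0, 5), (0, 2), (3, 5)], 5))  # -> 0.5
```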

Hope this helps!

adamxyang commented 4 years ago

Thanks for the quick reply and the explanation!

adamxyang commented 4 years ago

By the way, is there any particular reason for using S->NT as the root rule instead of the usual CNF form S->NT NT?

yoonkim commented 4 years ago

Initially I was experimenting with having several latent vectors per sentence, hoping to learn disentangled representations through the three types of rules (i.e. S[z_1]->NT, NT[z_2]->NT NT, NT[z_3]->w). But it turned out that one latent vector was sufficient, and S[z]->NT always collapsed to a one-hot distribution. I agree, though, that having S->NT NT is cleaner.
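The three latent-conditioned rule types can be sketched as conditional distributions whose scores depend on z. This is my own simplified toy parameterization (single random projections rather than the paper's MLPs and shared symbol embeddings), just to show where separate z_1, z_2, z_3 would factor in:

```python
# Toy sketch: one latent z produces scores for all three rule types.
# Shapes and projections are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
NT, T, V, D = 4, 3, 10, 8   # nonterminals, preterminals, vocab size, dim of z

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

z = rng.normal(size=D)      # one shared latent; the original idea used
                            # separate z_1, z_2, z_3, one per rule type

root_probs   = softmax(rng.normal(size=(NT, D)) @ z)                  # S[z] -> A
binary_probs = softmax(rng.normal(size=(NT, (NT + T) ** 2, D)) @ z)   # A[z] -> B C
emit_probs   = softmax(rng.normal(size=(T, V, D)) @ z)                # T[z] -> w

# Each conditional distribution normalizes over its right-hand sides.
print(root_probs.sum(), binary_probs.sum(axis=-1).round(6), emit_probs.sum(axis=-1).round(6))
```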

adamxyang commented 4 years ago

I see, thank you!