percyliang / sempre

Semantic Parser with Execution

Why does JoinFn use Formulas.lambdaApply instead of Formulas.betaReduce? #184

Open zhichul opened 6 years ago

zhichul commented 6 years ago

Hi!

I'm writing a CCG grammar with SEMPRE, and I noticed that the formula produced by joining a binary with a unary via JoinFn is not fully reduced when the binary is a LambdaFormula. After reading JoinFn.doJoin, my understanding is that this happens because the reduction uses Formulas.lambdaApply, which applies the lambda only once, rather than the full Formulas.betaReduce, which would reduce all nested lambdas. I'm trying to understand the reasoning behind this design decision; could someone provide some insight?

Here's an example, given the following rules:

(rule $noun (noun) (lambda f ((var f) (string noun))))
(rule $adj (adj) (lambda x (some_adj (var x))))
(rule $ROOT ($noun $adj) (JoinFn forward betaReduce))

If we try to parse "noun adj", JoinFn applies lambdaApply once and parses to ((lambda x (some_adj (var x))) (string noun)), rather than performing a full beta reduction, which would apply the inner lambda as well and yield (some_adj (string noun)).
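To make the distinction concrete, here is a minimal standalone sketch (in Python, not SEMPRE's Java) of a one-step application versus a full beta reduction over the example above. The tuple encoding and function names are illustrative assumptions, not SEMPRE's actual API:

```python
# Formulas are modeled as nested tuples: ("lambda", var, body),
# ("var", name), ("app", func, arg), or plain strings for constants.

def substitute(body, var, value):
    """Replace occurrences of ("var", var) in body with value.
    (Ignores variable capture, which is fine for this example.)"""
    if body == ("var", var):
        return value
    if isinstance(body, tuple):
        return tuple(substitute(part, var, value) for part in body)
    return body

def lambda_apply(func, arg):
    """Apply a lambda to an argument ONCE (analogous to a single lambdaApply)."""
    _, var, body = func
    return substitute(body, var, arg)

def beta_reduce(formula):
    """Recursively reduce all redexes (analogous to a full beta reduction)."""
    if isinstance(formula, tuple) and formula[0] == "app":
        func = beta_reduce(formula[1])
        arg = beta_reduce(formula[2])
        if isinstance(func, tuple) and func[0] == "lambda":
            return beta_reduce(lambda_apply(func, arg))
        return ("app", func, arg)
    return formula

# $noun: (lambda f ((var f) (string noun)))
noun = ("lambda", "f", ("app", ("var", "f"), "(string noun)"))
# $adj: (lambda x (some_adj (var x)))
adj = ("lambda", "x", ("app", "some_adj", ("var", "x")))

once = lambda_apply(noun, adj)
# One application leaves an unreduced redex:
# ((lambda x (some_adj (var x))) (string noun))
full = beta_reduce(once)
# Full reduction applies the inner lambda too:
# (some_adj (string noun))
```

Running this, `once` is the partially reduced formula the issue describes, while `full` is the expected fully reduced result.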

Again, I would be grateful if someone could explain the reasoning behind using lambdaApply instead of betaReduce in JoinFn.doJoin.

Cheers, Brian

ppasupat commented 6 years ago

Hmm.. that's actually strange. Probably a bug.

Do you still encounter a problem if Formulas.lambdaApply (in JoinFn) is wrapped inside Formulas.betaReduction? That should be the right behavior.

zhichul commented 6 years ago

Thanks for the quick reply. Yes, wrapping Formulas.lambdaApply (in JoinFn) inside Formulas.betaReduction solves the problem. However, I'm not entirely sure whether this is a bug or a deliberate design decision. Could it be an efficiency concern?

ppasupat commented 6 years ago

Not entirely sure. Do you observe a drop in performance and/or accuracy when betaReduction is used?