Open zhichul opened 6 years ago
Hmm.. that's actually strange. Probably a bug. Do you encounter a problem if `Formulas.lambdaApply` (in `JoinFn`) is wrapped inside `Formulas.betaReduction`? Would that be the right behavior?
Thanks for the quick comment, and yes, wrapping `Formulas.lambdaApply` (in `JoinFn`) inside `Formulas.betaReduction` would solve the problem. However, I'm not entirely sure whether this is a bug or a deliberate design decision. Could it be an efficiency issue?
Not entirely sure. Do you observe a drop in performance and/or accuracy when `betaReduction` is used?
Hi!

I'm attempting to write a CCG grammar using SEMPRE, and I observed that the formula produced by joining a binary and a unary with `JoinFn` is not fully reduced when the binary is a `LambdaFormula`. After reading `JoinFn.doJoin`, my understanding is that this happens because the reduction is done with `Formulas.lambdaApply`, which applies the lambda only once, instead of the full `Formulas.betaReduction`, which would reduce all nested lambdas. I'm trying to understand the reasoning behind this design decision; could someone provide some insight?

Here's an example. Given the following rules:
```
(rule $noun (noun) (lambda f ((var f) (string noun))))
(rule $adj (adj) (lambda x (some_adj (var x))))
(rule $ROOT ($noun $adj) (JoinFn forward betaReduce))
```
If we try to parse "noun adj", the join does `lambdaApply` once and parses to `((lambda x (some_adj (var x))) (string noun))`, rather than doing a full `betaReduction`, which would apply the inner lambda again and give `(some_adj (string noun))`.
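To make the difference concrete, here is a minimal, self-contained sketch of the two behaviors on a toy formula representation (this is only an illustration; `Lam`, `App`, `subst`, etc. are made-up names, not SEMPRE's actual classes):

```java
// Toy formula representation, for illustration only -- not SEMPRE's classes.
abstract class F {}
class Atom extends F { String s; Atom(String s) { this.s = s; } public String toString() { return s; } }
class Var extends F { String name; Var(String n) { name = n; } public String toString() { return "(var " + name + ")"; } }
class Lam extends F {
  String var; F body;
  Lam(String v, F b) { var = v; body = b; }
  public String toString() { return "(lambda " + var + " " + body + ")"; }
}
class App extends F {
  F fn, arg;
  App(F f, F a) { fn = f; arg = a; }
  public String toString() { return "(" + fn + " " + arg + ")"; }
}

public class BetaDemo {
  // Substitute `val` for occurrences of (var name) in `f` (variable capture ignored in this toy).
  static F subst(F f, String name, F val) {
    if (f instanceof Var) return ((Var) f).name.equals(name) ? val : f;
    if (f instanceof App) { App a = (App) f; return new App(subst(a.fn, name, val), subst(a.arg, name, val)); }
    if (f instanceof Lam) { Lam l = (Lam) f; return l.var.equals(name) ? l : new Lam(l.var, subst(l.body, name, val)); }
    return f;
  }

  // Single application of the outermost lambda (analogous to what I understand lambdaApply does).
  static F lambdaApply(Lam fn, F arg) { return subst(fn.body, fn.var, arg); }

  // Full reduction: keep contracting redexes until none remain (analogous to betaReduction).
  static F betaReduce(F f) {
    if (f instanceof App) {
      App a = (App) f;
      F fn = betaReduce(a.fn), arg = betaReduce(a.arg);
      return (fn instanceof Lam) ? betaReduce(lambdaApply((Lam) fn, arg)) : new App(fn, arg);
    }
    if (f instanceof Lam) { Lam l = (Lam) f; return new Lam(l.var, betaReduce(l.body)); }
    return f;
  }

  public static void main(String[] args) {
    // $noun: (lambda f ((var f) (string noun)))
    Lam noun = new Lam("f", new App(new Var("f"), new Atom("(string noun)")));
    // $adj:  (lambda x (some_adj (var x)))
    Lam adj = new Lam("x", new App(new Atom("some_adj"), new Var("x")));

    F once = lambdaApply(noun, adj);
    System.out.println(once);             // ((lambda x (some_adj (var x))) (string noun))
    System.out.println(betaReduce(once)); // (some_adj (string noun))
  }
}
```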
Again, I would be grateful if someone could explain the reasoning behind using `lambdaApply` instead of `betaReduction` in `JoinFn.doJoin`.

Cheers,
Brian