This issue is a notice to keep track of recurring problems in AE during E-matching.
At some point between 2.4.1 and 2.4.2, Mohamed introduced an optimization that flattens some associative operators, such as addition and multiplication over integers or reals. For instance, this optimization transforms a term of the form `x+(y+z)` into the 3-ary sum `x+y+z`. Mohamed didn't specify the reason that led him to introduce this optimization.
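For illustration, a minimal sketch of what such a flattening pass could look like, using a hypothetical nested-tuple term representation; the names and encoding are mine, not AE's internals.

```python
# Hypothetical term representation: a tuple ("+", t1, ..., tn) is an n-ary
# sum; any other value is an atom. This is an illustration, not AE's code.

def flatten(term):
    """Flatten nested applications of + into a single n-ary application."""
    if isinstance(term, tuple) and term[0] == "+":
        args = []
        for arg in term[1:]:
            flat = flatten(arg)
            if isinstance(flat, tuple) and flat[0] == "+":
                args.extend(flat[1:])  # splice the nested sum in place
            else:
                args.append(flat)
        return ("+",) + tuple(args)
    return term

# x+(y+z) becomes the 3-ary sum x+y+z:
assert flatten(("+", "x", ("+", "y", "z"))) == ("+", "x", "y", "z")
```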
This commit introduced a number of regressions, mostly reported in issue #505. The main reason is that flattening loses the syntactic structure of terms and prevents the E-matching module from finding candidate substitutions to instantiate lemmas.
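To see why substitutions go missing, here is a toy first-order matcher over the same hypothetical tuple representation (trigger variables are uppercase strings here); this is a sketch of the general mechanism, not AE's matcher.

```python
# Toy syntactic matcher: returns a substitution making `pattern` equal to
# `term`, or None. Uppercase strings stand for trigger variables (a
# convention of this sketch, not of AE).

def match(pattern, term, subst=None):
    subst = dict(subst or {})
    if isinstance(pattern, str) and pattern.isupper():  # trigger variable
        if pattern in subst:
            return subst if subst[pattern] == term else None
        subst[pattern] = term
        return subst
    if isinstance(pattern, tuple) and isinstance(term, tuple):
        # Heads and arities must coincide for a purely syntactic match.
        if pattern[0] != term[0] or len(pattern) != len(term):
            return None
        for p, t in zip(pattern[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

trigger = ("+", "X", ("+", "Y", "Z"))  # the trigger x+(y+z)
# Against the nested term a+(b+c), matching finds a substitution...
assert match(trigger, ("+", "a", ("+", "b", "c"))) == \
    {"X": "a", "Y": "b", "Z": "c"}
# ...but against the flattened 3-ary sum a+b+c, arities differ and it fails.
assert match(trigger, ("+", "a", "b", "c")) is None
```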
In the original commit, Mohamed applied the transformation only in the frontend. But it doesn't apply to triggers, since triggers are produced afterwards by applying substitutions with fresh variables. Thus, you can get the term `a+b+c` and the trigger `x+(y+z)`. I tried to apply the pass everywhere by putting it in the smart constructor of terms, but this modification introduces a lot of reordering passes.
It seems reordering terms in n-ary sums improves the situation a bit, but this trick is not a good idea. Imagine you have a term of the form `f(a, a+b)` and a trigger `f(x, y+x)`. No order on terms can prevent the two sums from being in different orders, since the terms in the trigger's sum are variables.
I tried to generate all the permutations of sums during matching rounds. For instance, if we have the term `f(a, a+b)` and the trigger `f(x, y+x)`, my patch considers the equivalent trigger `f(x, x+y)`. The patch works well on many tests but still fails on about 1000 tests of the SMT-LIB (mostly very long tests).
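A sketch of what the permutation idea could look like, again over the hypothetical tuple representation (this is not the actual patch): from one trigger, derive every variant obtained by reordering the arguments of each sum, so `f(x, y+x)` also yields `f(x, x+y)`.

```python
from itertools import permutations, product

# Hypothetical representation: ("f", a1, ..., an) is an application, "+" is
# the flattened n-ary sum, other strings are atoms/variables. Not AE's code.

def permute_sums(term):
    """Yield every variant of `term` obtained by reordering n-ary sums."""
    if isinstance(term, tuple):
        head, args = term[0], term[1:]
        # All variants of each argument, combined recursively.
        arg_variants = [list(permute_sums(a)) for a in args]
        for combo in product(*arg_variants):
            if head == "+":
                for perm in permutations(combo):
                    yield (head,) + perm
            else:
                yield (head,) + combo
    else:
        yield term

trigger = ("f", "X", ("+", "Y", "X"))  # the trigger f(x, y+x)
variants = set(permute_sums(trigger))
# Both argument orders of the sum are generated:
assert ("f", "X", ("+", "Y", "X")) in variants
assert ("f", "X", ("+", "X", "Y")) in variants
```

Note that the number of variants grows factorially with the arity of each sum, which is consistent with the patch timing out on very long SMT-LIB tests.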
The conclusion is that flattening cannot work without an E-matching module able to properly handle associative operators. We didn't find a quick patch to solve this issue.
We are thinking about a smarter solution.