Closed: delcypher closed this issue 8 years ago.
Good example: Z3 uses the more efficient SAT core in the first case. In the second case, it does not realize that your uses of `select` are benign and can be ackermannized (@MikolasJanota), so it falls back to the general SMT core.
@NikolajBjorner Thanks for the informative answer. I'm not familiar with ackermannization. Is that "Ackermann’s reduction" in http://research.microsoft.com/en-us/um/redmond/projects/z3/smt07-slides.pdf ?
Also, I take it that there is no QF_FPABV logic (or something with the same meaning) then?
Yes, I mean Ackermann's reduction: every occurrence of `(select x0x23234 (_ bv2 32))` can be replaced by a fresh constant `x2`, `(select x0x23234 (_ bv1 32))` by `x1`, etc. This preserves satisfiability and, more usefully, turns the formula into the syntactic fragment of QF_FP that Z3 can detect.
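As a hedged sketch of that reduction (constant names invented here), each array read at a constant index becomes a fresh constant. In general, Ackermann's reduction also adds functional-consistency constraints, but reads at syntactically distinct constant indices need none:

```smt2
; Before: reads from an array of bit-vectors at constant indices
(declare-const a (Array (_ BitVec 32) (_ BitVec 32)))
(assert (= (select a (_ bv1 32)) (_ bv5 32)))
(assert (= (select a (_ bv2 32)) (_ bv7 32)))

; After Ackermannization: each read becomes a fresh constant.
; The indices (_ bv1 32) and (_ bv2 32) are distinct literals, so the
; usual consistency constraint (index equal implies value equal) is
; trivially satisfied and can be dropped.
(declare-const x1 (_ BitVec 32))
(declare-const x2 (_ BitVec 32))
(assert (= x1 (_ bv5 32)))
(assert (= x2 (_ bv7 32)))
```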
@NikolajBjorner Okay. Thanks for clarifying.
There is no dedicated tactic for QF_FPABV yet, because that combination hasn't appeared in benchmarks. This means that Z3 will fall back to the most general solver, which supports everything (the `smt` tactic). When you enable verbose output, e.g., -v:10, you'll see that it says
(simplifier ... )
(smt ...)
While for the first query it says
...
(fpa2bv ...
...
(ackermannize ...
...
(sat-....
If it's the floats that break things, you can rewrite them into bit-vectors up front by rolling your own custom tactic, e.g., by replacing `(check-sat)` with

(check-sat-using (then simplify fpa2bv qfaufbv))

(where `qfaufbv` is whatever tactic you were using before the addition of floats).
We also recently added tactics that rewrite bit-vector arrays into bit-vector functions, which can then be ackermannized, so a first draft QF_FPABV tactic for your type of problem could be as follows:
(check-sat-using (then simplify
fpa2bv
bvarray2uf
ackermannize_bv
(using-params simplify :elim_and true)
bit-blast sat))
which solves the problem even faster than the original FP version was solved.
@wintersteiger That's cool! Clearly I need to invest some time in learning how to use the different tactics. Your custom tactic does indeed solve the problem faster :)
A few questions about it...

1. What does `(then ...)` mean here? Looking at Z3's output when given `(help-tactic)`, I see `(and-then <tactic>+)` and `(par-then <tactic1> <tactic2>)`. Would I be right in presuming that here `(then ...)` means `(and-then ...)`?
2. Why `(using-params simplify :elim_and true)` here? Why run `simplify` here with `:elim_and`, which is normally false by default?
3. `(help-tactic)` has `(or-else)`. What does it mean for a tactic to fail? For example, for `simplify`, if no simplifications were made, is that a failure? Similarly, do the other tactics have different notions of failure?

`(then ...)` is the same as `(and-then ...)`. `(using-params ...)` takes one tactic, one parameter name, and one (or more) parameter values; in other words, it runs `simplify` with parameter `elim_and` set to `true`. The `bit-blast` tactic does not support everything; it relies on the simplifier to get rid of some functions first, which is why we have to run `simplify` just before `bit-blast`. The additional option `elim_and` is enabled to make sure the simplifier also rewrites and-expressions. I'm closing this as all questions have been answered. If you hit more performance problems involving floats, please do let us know!
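A hedged summary of the combinators discussed above, in Z3's SMT-LIB tactic syntax:

```smt2
; (then t1 t2 ...)       apply t1, then t2 ... to every resulting subgoal
; (using-params t :p v)  run tactic t with parameter p set to v
; (or-else t1 t2)        try t1; if t1 fails, apply t2 to the original goal
(check-sat-using
  (or-else (then (using-params simplify :elim_and true) bit-blast sat)
           smt))
```

Here, if the bit-blasting branch fails (e.g., `bit-blast` meets a term it cannot handle), the query falls through to the general `smt` tactic instead of erroring out.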
Consider the following queries (I think they are technically equisatisfiable). From the perspective of the work that I'm doing, I can think of them as being equivalent.
Using floating-point variables (execution time: < 1 second)
Using arrays of bitvectors instead (execution time: ~5 seconds)
There is a noticeable performance difference between the two; does anyone know why?
I tried looking for an "arrays of bitvectors and floating-point arithmetic" logic in Z3's code so I could set the logic, but I couldn't find one. Is there one?
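The queries themselves are not reproduced in this thread; a minimal hypothetical pair in the same spirit (all names and values invented) might look like:

```smt2
; Variant 1: plain floating-point variables. This falls in QF_FP,
; which Z3's specialized SAT-based core handles directly.
(declare-const x (_ FloatingPoint 11 53))
(assert (fp.gt x ((_ to_fp 11 53) RNE 0.0)))
(check-sat)

; Variant 2: the same value read out of an array of bit-vectors and
; reinterpreted as a float. The select (even at a constant index)
; pushes the query outside QF_FP, so Z3 falls back to the general
; 'smt' core unless the select is ackermannized away.
(declare-const a (Array (_ BitVec 32) (_ BitVec 64)))
(define-fun x () (_ FloatingPoint 11 53)
  ((_ to_fp 11 53) (select a (_ bv0 32))))
(assert (fp.gt x ((_ to_fp 11 53) RNE 0.0)))
(check-sat)
```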