InvariMint uses a DFA minimization optimization that is based on the following
equality:
min(D1 \intersect ... \intersect Dn) = min(... min(min(D1 \intersect D2) \intersect D3) ... \intersect Dn)
The right-hand side keeps the model from blowing up by minimizing each
intermediate model immediately after every pairwise intersection.
We need to:
(1) Test that running InvariMint with and without the optimization always
produces the same model.
(2) Evaluate the performance of this optimization and understand when it makes
sense to enable it. Should it be enabled by default, or is it only worthwhile
when intersecting more than some number N of DFAs (assuming each DFA has
roughly 4-6 states)?
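For task (1), a correctness test just needs to check that the two intersection
orders yield language-equivalent (and, after minimization, equally sized)
models. Below is a minimal Python sketch of such a test, not InvariMint's
actual Java implementation; the DFA representation, the three example DFAs,
and all names here are illustrative assumptions:

```python
class DFA:
    """Complete DFA: delta maps every (state, symbol) pair to a next state."""
    def __init__(self, states, alphabet, delta, start, accepting):
        self.states, self.alphabet = set(states), set(alphabet)
        self.delta, self.start, self.accepting = dict(delta), start, set(accepting)

def intersect(a, b):
    """Product construction over reachable pairs; accepts L(a) ∩ L(b)."""
    start = (a.start, b.start)
    states, delta, frontier = {start}, {}, [start]
    while frontier:
        (p, q) = frontier.pop()
        for s in a.alphabet:
            nxt = (a.delta[(p, s)], b.delta[(q, s)])
            delta[((p, q), s)] = nxt
            if nxt not in states:
                states.add(nxt); frontier.append(nxt)
    accepting = {(p, q) for (p, q) in states
                 if p in a.accepting and q in b.accepting}
    return DFA(states, a.alphabet, delta, start, accepting)

def minimize(d):
    """Moore-style partition refinement; returns the minimal equivalent DFA."""
    part = {q: (q in d.accepting) for q in d.states}  # split: accepting vs not
    while True:
        # signature = own block plus the blocks each symbol leads to
        sig = {q: (part[q], tuple(part[d.delta[(q, s)]]
                                  for s in sorted(d.alphabet)))
               for q in d.states}
        blocks = {v: i for i, v in enumerate(sorted(set(sig.values())))}
        new = {q: blocks[sig[q]] for q in d.states}
        if len(set(new.values())) == len(set(part.values())):
            part = new; break  # no block split: partition is stable
        part = new
    delta = {(part[q], s): part[d.delta[(q, s)]]
             for q in d.states for s in d.alphabet}
    return DFA(set(part.values()), d.alphabet, delta, part[d.start],
               {part[q] for q in d.accepting})

def equivalent(a, b):
    """L(a) == L(b): no reachable product state accepts in exactly one DFA."""
    start = (a.start, b.start)
    seen, frontier = {start}, [start]
    while frontier:
        (p, q) = frontier.pop()
        if (p in a.accepting) != (q in b.accepting):
            return False
        for s in a.alphabet:
            nxt = (a.delta[(p, s)], b.delta[(q, s)])
            if nxt not in seen:
                seen.add(nxt); frontier.append(nxt)
    return True

def accepts(d, w):
    q = d.start
    for s in w:
        q = d.delta[(q, s)]
    return q in d.accepting

# Three toy property DFAs over {a, b} (stand-ins for mined invariant DFAs).
sigma = {'a', 'b'}
d1 = DFA({0, 1}, sigma, {(0, 'a'): 1, (1, 'a'): 0,
                         (0, 'b'): 0, (1, 'b'): 1}, 0, {0})  # even number of a's
d2 = DFA({0, 1}, sigma, {(0, 'a'): 0, (1, 'a'): 0,
                         (0, 'b'): 1, (1, 'b'): 1}, 0, {1})  # ends with b
d3 = DFA({0, 1, 2}, sigma, {(q, s): (q + 1) % 3
                            for q in (0, 1, 2) for s in sigma}, 0, {0})  # |w| % 3 == 0

# Without the optimization: minimize once, at the end.
unoptimized = minimize(intersect(intersect(d1, d2), d3))
# With the optimization: minimize after each intermediate intersection.
optimized = minimize(intersect(minimize(intersect(d1, d2)), d3))

assert equivalent(unoptimized, optimized)
# Minimal DFAs are unique up to isomorphism, so state counts must match too.
assert len(unoptimized.states) == len(optimized.states)
```

The same structure suggests the performance experiment for task (2): time both
folds while varying the number of DFAs and their state counts, and record the
peak intermediate-model size each order produces.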
Original issue reported on code.google.com by bestchai on 4 Mar 2012 at 1:26