CPMpy / cpmpy

Constraint Programming and Modeling library in Python, based on numpy, with direct solver access.
Apache License 2.0

Streamline double decomposition for alldiff #300

Open JoD opened 1 year ago

JoD commented 1 year ago

Alldiff has two decompositions: a set of pairwise variable disequalities, and a set of at-most-one constraints (one for each value in the union of the variables' domains). Currently, the pairwise disequalities are used in decompose, and the at-most-one constraints in linearize.

The pairwise disequalities have the advantage that they do not depend on the size of the domains: even for variables with large domains (say, 0..999999) the size of the decomposition remains quadratic in the number of variables. The at-most-one constraints have the advantage of a better linear relaxation if the integer variables will be converted to Boolean variables anyway. E.g., for the pigeonhole or sudoku problem, you very much prefer the at-most-one encoding when solving with an ILP solver (SAT solvers will decompose the at-most-one back into pairwise disequalities anyway).

A simple heuristic is to check the number of values in the union of the variables' domains. If it is larger than quadratic in the number of variables, go for the pairwise disequalities decomposition; otherwise, use the at-most-one.
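The heuristic above can be sketched in plain Python (hypothetical helper name and representation, not CPMpy's actual API): compare the number of values across all domains with the number of pairwise disequalities.

```python
# Hypothetical sketch, not CPMpy's actual API: pick an AllDifferent
# decomposition by comparing domain size against the quadratic
# number of pairwise disequalities.

def choose_alldiff_decomposition(domains):
    """domains: list of (lb, ub) bounds, one per variable."""
    n = len(domains)
    # all values any variable can take (union of the domains)
    values = set()
    for lb, ub in domains:
        values.update(range(lb, ub + 1))
    # pairwise disequalities: one per unordered variable pair -> O(n^2)
    pairwise_size = n * (n - 1) // 2
    # at-most-one: one constraint per value in the union of domains
    amo_size = len(values)
    if amo_size > pairwise_size:
        return "pairwise"     # huge domains: stay quadratic in n
    return "at_most_one"      # small domains: better linear relaxation

# 4 variables over 0..999999: domain dwarfs n^2 -> pairwise
print(choose_alldiff_decomposition([(0, 999999)] * 4))  # pairwise
# sudoku-like: 9 variables over 1..9 -> at-most-one
print(choose_alldiff_decomposition([(1, 9)] * 9))       # at_most_one
```

This treats "quadratic in the number of variables" as the pair count n(n-1)/2; the exact threshold is a tuning choice.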

IgnaceBleukx commented 1 year ago

This is probably part of the larger question of specializing decompositions of certain constraints for certain (types of) solvers. For example, the current decomposition of Circuit is very inefficient for the Gurobi solver, probably because it uses AllDifferent constraints as well as Element constraints! But much better decompositions specialized for MIP do exist, such as the Miller-Tucker-Zemlin formulation: https://how-to.aimms.com/Articles/332/332-Miller-Tucker-Zemlin-formulation.html
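For reference, the MTZ idea introduces order variables u_i and requires u_i - u_j + n*x_ij <= n - 1 for every arc (i, j) with j != 0, which rules out subtours. A plain-Python sketch (illustrative, not CPMpy code) that checks these inequalities on a candidate successor array:

```python
# Hedged sketch of the Miller-Tucker-Zemlin (MTZ) inequalities for
# Circuit: order variables u_i with u_i - u_j + n*x_ij <= n - 1 for
# arcs (i, j), j != 0.  Plain Python for illustration only.

def mtz_holds(succ):
    """succ[i] = successor of node i; assumes node 0's chain returns to 0."""
    n = len(succ)
    # derive visit order u by following successors from node 0
    u = [0] * n
    node, order = 0, 0
    while succ[node] != 0:
        node = succ[node]
        order += 1
        u[node] = order
    # check every MTZ inequality (x_ij = 1 iff succ[i] == j)
    for i in range(n):
        for j in range(1, n):
            x_ij = 1 if succ[i] == j else 0
            if u[i] - u[j] + n * x_ij > n - 1:
                return False
    return True

print(mtz_holds([1, 2, 3, 0]))  # one full circuit: True
print(mtz_holds([1, 0, 3, 2]))  # two subtours (0-1 and 2-3): False
```

In a MIP model the u_i are integer decision variables and the x_ij are the arc Booleans; the point is that this formulation needs no AllDifferent or Element at all.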

The question is, how do we implement this in a clean way? Do it in linearize? Or should the decompose() function have an argument "type" which allows the caller to choose which decomposition to use? Personally, I think linearize would be the best place to do it...

Dimosts commented 1 year ago

The above points open a more general discussion. Different decompositions are better for different types of solvers, and this is constraint-specific. So if we decide to specialize decompositions of certain constraints for certain (types of) solvers, the different decompositions should live on the constraint classes, not in linearize. If this applies to more constraints than just Circuit (and it should, imo), it would not be clean to have it all in linearize.

We don't have to use a "type" argument in decompose, which would not be so clean. I think we can just have a decompose_linear() function for the decomposition in linearize. In the GlobalConstraint class it will just call decompose() by default, unless otherwise specified. Then, for constraints for which we find a better decomposition, we can implement decompose_linear(). Linearize will then call decompose_linear() instead of decompose().
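The proposed fallback pattern could look roughly like this (class and return values are illustrative stand-ins, not CPMpy's actual implementation):

```python
# Hypothetical sketch of the proposal: decompose_linear() defaults to
# decompose() on the base class, and is overridden only where a better
# linear decomposition is known.  Names/returns are illustrative.

class GlobalConstraint:
    def decompose(self):
        raise NotImplementedError

    def decompose_linear(self):
        # default: reuse the generic CP decomposition
        return self.decompose()

class AllDifferent(GlobalConstraint):
    def decompose(self):
        return ["pairwise disequalities"]   # generic CP decomposition

    def decompose_linear(self):
        return ["at-most-one per value"]    # better for ILP solvers

class Circuit(GlobalConstraint):
    def decompose(self):
        return ["alldifferent + element"]   # no override yet: linearize
                                            # falls back to this

print(AllDifferent().decompose_linear())  # ['at-most-one per value']
print(Circuit().decompose_linear())       # ['alldifferent + element']
```

linearize then calls decompose_linear() unconditionally and gets the specialized decomposition exactly where one exists.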

IgnaceBleukx commented 1 year ago

Well, in that case you would need to pass to decompose_globals which type of solver you are using, to make sure we call the correct decomposition function. If you don't want all decompositions in a single function, we can have helper functions decompose_cp(), decompose_linear(), decompose_... The main decompose() function would then simply take an argument "type" and act as a switch-case deciding which one to call. Then we still need to pass the type of solver to decompose_globals, but at least it gets rid of annoying switch-cases in that function.
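That variant, a single decompose() dispatching on a "type" argument, could be sketched like this (again with illustrative names, not CPMpy's actual API):

```python
# Hedged sketch of the dispatch variant: decompose(kind) switches over
# the available specializations, with a fallback to the CP version.
# Method names and "kind" values are illustrative assumptions.

class GlobalConstraint:
    def decompose(self, kind="cp"):
        # single switch-case over the known decomposition types
        if kind == "cp":
            return self.decompose_cp()
        if kind == "linear":
            return self.decompose_linear()
        raise ValueError(f"unknown decomposition type: {kind}")

    def decompose_cp(self):
        raise NotImplementedError

    def decompose_linear(self):
        # fall back to the CP decomposition when no override exists
        return self.decompose_cp()

class AllDifferent(GlobalConstraint):
    def decompose_cp(self):
        return ["pairwise disequalities"]

# no decompose_linear override: the switch falls back to the CP version
print(AllDifferent().decompose(kind="linear"))  # ['pairwise disequalities']
```

The benefit is a single entry point: decompose_globals only needs to forward the solver-derived "type", and the per-constraint switch-case logic lives once in the base class.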

Dimosts commented 1 year ago

Yes, I like that direction. Should it be done in the "decompose globals in tree" PR or in a different one? We already have too many interacting PRs open.

IgnaceBleukx commented 1 year ago

I think this is something that can be done at a later stage. It's an improvement to the code which can easily be added to decompose_globals once it's merged. There is no hurry in implementing this, I think.