scipopt / PyGCGOpt

Python interface and modeling environment for GCG
https://scipopt.github.io/PyGCGOpt/
MIT License

Segmentation fault running GCG using `PyGCGOpt` #10

Closed aichao closed 2 years ago

aichao commented 2 years ago

Hi All,

First, I want to thank you guys for putting together a Python wrapper for GCG. However, after installing SCIP Optimization Suite 8.0.0 with GCG 3.5.0 and PyGCGOpt 0.1.4, I get a segmentation fault when optimizing my MIP model; see the log below. The same model, encoded as a SCIP model using PySCIPOpt, runs fine. I know this is not a lot of information to go by, but I was wondering if you can offer some advice on how to debug this.

Best, Alan

permute transformed problem using random seed 715249824
presolving:
(round 1, fast)       531 del vars, 3084 del conss, 0 add conss, 2130 chg bounds, 2129 chg sides, 1931 chg coeffs, 0 upgd conss, 100 impls, 131 clqs
(round 2, fast)       707 del vars, 3606 del conss, 0 add conss, 2137 chg bounds, 2146 chg sides, 3256 chg coeffs, 0 upgd conss, 100 impls, 70 clqs
(round 3, fast)       741 del vars, 3610 del conss, 0 add conss, 2137 chg bounds, 2146 chg sides, 3256 chg coeffs, 0 upgd conss, 100 impls, 70 clqs
(round 4, fast)       741 del vars, 3610 del conss, 0 add conss, 2177 chg bounds, 2146 chg sides, 3256 chg coeffs, 0 upgd conss, 100 impls, 70 clqs
(round 5, exhaustive) 741 del vars, 3637 del conss, 0 add conss, 2177 chg bounds, 2146 chg sides, 3256 chg coeffs, 0 upgd conss, 100 impls, 70 clqs
(round 6, exhaustive) 741 del vars, 3637 del conss, 0 add conss, 2177 chg bounds, 2146 chg sides, 3256 chg coeffs, 2694 upgd conss, 100 impls, 70 clqs
(round 7, medium)     741 del vars, 3697 del conss, 170 add conss, 2177 chg bounds, 2208 chg sides, 3366 chg coeffs, 2694 upgd conss, 2636 impls, 924 clqs
(round 8, exhaustive) 741 del vars, 3761 del conss, 173 add conss, 2177 chg bounds, 2208 chg sides, 3366 chg coeffs, 2694 upgd conss, 2636 impls, 924 clqs
(round 9, exhaustive) 741 del vars, 3761 del conss, 173 add conss, 2177 chg bounds, 2208 chg sides, 3370 chg coeffs, 5174 upgd conss, 2636 impls, 924 clqs
(round 10, medium)     1981 del vars, 3761 del conss, 173 add conss, 2177 chg bounds, 2208 chg sides, 3370 chg coeffs, 5174 upgd conss, 5263 impls, 290 clqs
(round 11, medium)     1981 del vars, 6241 del conss, 173 add conss, 2177 chg bounds, 2208 chg sides, 3370 chg coeffs, 5174 upgd conss, 5263 impls, 290 clqs
   (1.0s) probing: 1000/1541 (64.9%) - 0 fixings, 0 aggregations, 931 implications, 0 bound changes
   (1.1s) probing cycle finished: starting next cycle
(round 12, exhaustive) 1986 del vars, 6241 del conss, 173 add conss, 2177 chg bounds, 2208 chg sides, 3370 chg coeffs, 5174 upgd conss, 6582 impls, 646 clqs
(round 13, fast)       1986 del vars, 6251 del conss, 173 add conss, 2177 chg bounds, 2208 chg sides, 3370 chg coeffs, 5174 upgd conss, 6582 impls, 646 clqs
(round 14, exhaustive) 1986 del vars, 6256 del conss, 173 add conss, 2177 chg bounds, 2208 chg sides, 3373 chg coeffs, 5174 upgd conss, 6582 impls, 646 clqs
   (1.1s) probing: 55/1536 (3.6%) - 5 fixings, 0 aggregations, 1780 implications, 0 bound changes
   (1.1s) probing aborted: 75/75 successive totally useless probings
presolving (15 rounds: 15 fast, 10 medium, 7 exhaustive):
 1986 deleted vars, 6256 deleted constraints, 173 added constraints, 2177 tightened bounds, 0 added holes, 2208 changed sides, 3373 changed coefficients
 6582 implications, 646 cliques
presolved problem has 2816 variables (1536 bin, 1240 int, 0 impl, 40 cont) and 3297 constraints
   2600 constraints of type <varbound>
     10 constraints of type <knapsack>
    125 constraints of type <setppc>
      3 constraints of type <and>
    559 constraints of type <linear>
Presolving Time: 1.15
 Consclassifier "nonzeros" yields a classification with 12  different constraint classes 
 Consclassifier "constypes" yields a classification with 5 different constraint classes 
 Consclassifier "constypes according to miplib" yields a classification with 7 different constraint classes 
 Consclassifier "gamsdomain" yields a classification with 1  different constraint classes 
 Consclassifier "gamssymbols" yields a classification with 1  different constraint classes 
 Conspartition "gamssymbols" is not considered since it offers the same structure as "gamsdomain" conspartition
 Varclassifier "gamsdomain" yields a classification with 1  different variable classes 
 Varclassifier "gamssymbols" yields a classification with 1  different variable classes 
 Varpartition "gamssymbols" is not considered since it offers the same structure as "gamsdomain"
 Varclassifier "vartypes" yields a classification with 3 different variable classes
 Varclassifier "varobjvals" yields a classification with 2 different variable classes
 Varclassifier "varobjvalsigns" yields a classification with 2 different variable classes
 Varpartition "varobjvalsigns" is not considered since it offers the same structure as "varobjvals"
 Added reduced version of conspartition nonzeros with 9  different constraint classes 
 in dec_consclass: there are 5 different constraint classes   
  the current consclass distribution includes 12 classes but only 9 are allowed for propagatePartialdec() of cons class detector
 the current constraint classifier "constypes" consists of 5 different classes   
  the current constraint classifier "constypes according to miplib" consists of 7 different classes   
  the current constraint classifier "gamsdomain" consists of 1 different classes   
  the current constraint classifier "nonzeros-red-to-9" consists of 9 different classes   
 dec_consclass found 670 new partialdecs 
dec_densemasterconss found 1 new partialdec 
dec_neighborhoodmaster found 1 new partialdec 
POSTPROCESSING of decompositions. Added 160 new decomps. 
Found 820 finished decompositions.
Measured running time per detector:
Detector postprocess               worked on      160 finished decompositions and took a total time of      0.076
Detector consclass                 worked on      811 finished decompositions and took a total time of      0.174
Detector connectedbase             worked on      819 finished decompositions and took a total time of      0.689
Detector generalmastersetpack      worked on        1 finished decompositions and took a total time of      0.002
Detector varclass                  worked on        7 finished decompositions and took a total time of      0.001
Detection Time: 3.60

A Dantzig-Wolfe reformulation is applied to solve the original problem.
Chosen structure has 4 blocks, 1 master-only (static) variables, and 0 linking constraints.
This decomposition has a maxwhite score of 0.001265.
Warning: Discretization with continuous variables is only an experimental feature.

Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
jurgen-lentz commented 2 years ago

Hi Alan,

Can you please provide us with your code or with a file of your problem, e.g. in LP or MPS format?

Best wishes, Jurgen

aichao commented 2 years ago

Hi Jurgen,

Thank you for the quick reply. Attached is my model in .mps format:

joint_model.mps.zip

The Python code that generated this model is:

import numpy as np
import pygcgopt

# OptimizationProblem is a protobuf message from the surrounding project
# (its import is omitted here); see the description below the code.

def create_joint_optimization_model(problem: OptimizationProblem) -> pygcgopt.Model:
    """ Create a GCG optimization model from the OptimizationProblem

    Create a GCG Model from the mathematical program:

        minimize f'x
        where Ax <= b
              lb <= x <= ub

    The decision variable `x` may be a mixture of "Integer", "Continuous", and "Binary" variables

    :param problem: OptimizationProblem
        The optimization problem.
    :return: Model
        The GCG Model
    """
    model = pygcgopt.Model("Joint Optimization")

    a_indptr = np.frombuffer(problem.a_indptr, dtype=np.int32)
    a_indices = np.frombuffer(problem.a_indices, dtype=np.int32)
    a_data = np.frombuffer(problem.a_data, dtype=float)

    start_i = 0
    start_col = 0
    x = []
    for p in problem.portfolio:
        nx = len(p.lots_ms) + len(p.lots_h)
        f = np.frombuffer(p.f, dtype=float)
        lb = np.frombuffer(p.lb, dtype=float)
        ub = np.frombuffer(p.ub, dtype=float)

        x_core = [model.addVar(vtype='I', name=('p' + p.id + '_x_core' + str(i)), lb=lb[i], ub=ub[i], obj=f[i])
                  for i in range(nx)]
        nx_ne = nx + p.ne
        x_dist = [model.addVar(vtype='C', name=('p' + p.id + '_x_dist' + str(i - nx)), lb=0., ub=model.infinity(),
                               obj=f[i])
                  for i in range(nx, nx_ne)]
        nx_ne_ns = nx_ne + p.n_slack
        x_slack = [model.addVar(vtype='C', name=('p' + p.id + '_x_slack' + str(i - nx_ne)), lb=0., ub=model.infinity(),
                                obj=f[i])
                   for i in range(nx_ne, nx_ne_ns)]
        x_bin = [model.addVar(vtype='B', name=('p' + p.id + '_x_ind' + str(i - nx_ne_ns)), lb=0, ub=1, obj=0.)
                 for i in range(nx_ne_ns, p.n)]
        x.extend(x_core + x_dist + x_slack + x_bin)

        # <= constraints
        b = np.frombuffer(p.b, dtype=float)
        for i in range(start_i, start_i + p.nc):
            start = a_indptr[i]
            end = a_indptr[i + 1]
            coeffs = a_data[start:end]
            model.addCons(pygcgopt.quicksum(
                coeff_j * x[j] for coeff_j, j in zip(coeffs, a_indices[start:end])) <= b[i - start_i])

        start_i += p.nc
        start_col += p.n

    # if joint problem, extend the decision variables to the link variables and add the complicating constraints
    if problem.HasField("b_joint"):
        x.extend([model.addVar(vtype='B', name=('joint_l_ind' + str(i)), lb=0, ub=1, obj=0.)
                  for i in range(2 * problem.n_joint_lots)])
        x.extend([model.addVar(vtype='B', name=('joint_c_ind' + str(i)), lb=0, ub=1, obj=0.)
                  for i in range(problem.n_joint_cusips)])
        b_joint = np.frombuffer(problem.b_joint, dtype=float)
        for i, b_j in enumerate(b_joint):
            start = a_indptr[start_i + i]
            end = a_indptr[start_i + i + 1]
            coeffs = a_data[start:end]
            model.addCons(pygcgopt.quicksum(coeff_j * x[j] for coeff_j, j in zip(coeffs, a_indices[start:end])) <= b_j)

    model.data = x
    model.setMinimize()

    model.writeProblem('joint_model.mps')

    return model

Here, OptimizationProblem is a protobuf message containing the f, b, lb, and ub vectors as NumPy arrays and the A matrix as a SciPy CSR matrix. Each "portfolio" in the OptimizationProblem contains a sub-problem, and the A matrix has the familiar block-diagonal structure of sub-problems with complicating constraints at the bottom. There should be 20 such sub-problems in the attached model.
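
For reference, the constraint loop above relies on the standard SciPy CSR row-slicing convention. A minimal sketch with a small made-up matrix (not the actual problem data), just to show how a_indptr, a_indices, and a_data are read per row:

import numpy as np
from scipy.sparse import csr_matrix

# small hypothetical matrix, only to illustrate the indptr/indices/data layout
A = csr_matrix(np.array([[1.0, 0.0, 2.0],
                         [0.0, 3.0, 0.0]]))

i = 0  # row index
start, end = A.indptr[i], A.indptr[i + 1]
cols = A.indices[start:end]  # column indices of the nonzeros in row i -> [0, 2]
vals = A.data[start:end]     # corresponding coefficients -> [1.0, 2.0]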

When I said earlier that the same model generated using PySCIPOpt runs fine, I mean that the above code, with pyscipopt substituted for pygcgopt, builds a model that solves without issue. Maybe it is a bad assumption that I can simply swap pyscipopt for pygcgopt?
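
In other words, the two variants differ only in which package Model and quicksum are taken from, roughly like this (a minimal sketch, not the full function):

import pygcgopt as mod     # GCG variant: this is the one that segfaults
# import pyscipopt as mod  # SCIP variant: runs fine on the same model

model = mod.Model("Joint Optimization")
# ... addVar/addCons calls exactly as in the function above ...
model.optimize()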

Thanks again to you and the team for providing GCG and now PyGCGOpt to the world!

With Best Regards, Alan

jurgen-lentz commented 2 years ago

Hi Alan,

One way to work around this issue in PyGCGOpt is to run the detection without presolving. To do so, you only need to call detect() before you optimize.
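
For your model that would look something like this (a minimal sketch, assuming the create_joint_optimization_model function from your post):

model = create_joint_optimization_model(problem)

# run GCG's structure detection on the original, non-presolved problem
model.detect()

# then solve as usual
model.optimize()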

Since you already know the block-diagonal structure of your problem, I additionally suggest specifying it manually using pygcgopt, in case GCG does not already identify it as the best decomposition. For more information about manually specifying a decomposition, I refer you to the example in our documentation.

Yes, your assumption is right. The Model class of pygcgopt does contain all the methods of the Model class of pyscipopt because it inherits from it.
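
You can verify this directly, e.g.:

import pyscipopt
import pygcgopt

# pygcgopt.Model is a subclass of pyscipopt.Model, so all SCIP Model methods
# (addVar, addCons, writeProblem, ...) are available on GCG models as well
assert issubclass(pygcgopt.Model, pyscipopt.Model)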

Best wishes, Jurgen

aichao commented 2 years ago

Hi Jurgen,

Thank you for the solution. Detecting without presolving works and makes sense. I will also look at your example for manually specifying the decomposition. I am closing this issue now.

Best wishes, Alan