lanl-ansi / MINLPLib.jl

A JuMP-based library of Non-Linear and Mixed-Integer Non-Linear Programs

Update MINLPlib instances #40

Open bernalde opened 4 years ago

bernalde commented 4 years ago

Hi, for the upcoming INFORMS Annual Meeting we are presenting the results of our convex MINLP solver benchmark, and we want to show updated numbers for all the solvers. Since we published the paper, many things have happened with Julia, JuMP, and the solvers themselves, so we plan to update the results for the Julia-based solvers (Juniper and Pavito, plus adding Alpine). We use MINLPLib for the problem instances.

The problem library has also evolved, and many new problems have been added to it. Could those new problems be uploaded to this repository? I assume you have a GAMS/NL/AMPL-to-JuMP translator, which makes this task less problematic. Thanks!

On a related note, the GAMS MINLP problem library is no longer maintained and was moved into MINLPLib. This means the folders minlp and minlp2 in this repository are redundant, in case you want to reduce the repository size.

harshangrjn commented 4 years ago

@bernalde Yes! We haven't had a chance to update this repo much in the last year or so, but we do have the Julia parser we used to convert GAMS/AMPL to JuMP (it's been some time since it was written). Here is the source in case you are interested in adding the new instances to this repo: [link]. I also agree with merging the minlp and minlp2 instances, which would reduce the size of this repo. We can get in touch and discuss this further.

bernalde commented 4 years ago

Thank you for the link! It seems that it works with Julia 0.6 and generates models in JuMP 0.18. The (naive) question is: will these models work with the latest versions of Julia and JuMP (1.5 and 0.21, respectively)?
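For concreteness, this is the kind of break I would expect (a minimal sketch, using Ipopt purely for illustration; the exact changes across JuMP 0.19/0.20/0.21 vary):

```julia
using JuMP, Ipopt

# JuMP v0.18 style, as emitted by the old translator -- no longer parses:
#   m = Model(solver = IpoptSolver())
#   @variable(m, x >= 0)
#   @NLobjective(m, Min, (x - 2)^2)
#   solve(m)

# Rough JuMP v0.21 equivalent: the solver is attached via an optimizer
# constructor, and `solve` became `optimize!`.
m = Model(Ipopt.Optimizer)
@variable(m, x >= 0)
@NLobjective(m, Min, (x - 2)^2)
optimize!(m)
println(termination_status(m), ", objective = ", objective_value(m))
```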

This might be too much to ask, but we had some scripts (back then @Wikunia helped us with that, thanks btw!) that would run the instances with the Julia solvers and report the results in the GAMS trace-file format, which allowed us to compare the solvers across instances via Paver. Do you have something along those lines? Our study could be automated for future benchmarks, and Julia might be the best choice as a central hub for calling the solvers: I saw you have another repository that already provides wrappers for some MINLP solvers, and there are existing tools for benchmarking in Julia. Let me know what you think; a rough sketch of what I mean is below.
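Something like the following helper is what I have in mind (the column subset and status-code mapping are assumptions for illustration only; the authoritative field list is in the GAMS trace-file and Paver documentation):

```julia
using JuMP

# Hypothetical helper: append one GAMS-trace-style record for a solved JuMP
# model. The columns and status codes here are illustrative assumptions;
# consult the GAMS/Paver docs for the real trace-record definition.
function write_trace_record(io::IO, instance::AbstractString, m::Model;
                            solver::AbstractString = "Juniper",
                            modeltype::AbstractString = "MINLP")
    modelstatus  = termination_status(m) == MOI.OPTIMAL ? 1 : 14  # 1 = optimal, 14 = no solution (assumed mapping)
    solverstatus = 1                                              # 1 = normal completion (assumed)
    obj = has_values(m) ? objective_value(m) : NaN
    println(io, join([instance, modeltype, solver,
                      modelstatus, solverstatus, obj, solve_time(m)], ","))
end
```

One such line per instance, plus the header Paver expects, and the whole study would be reproducible from Julia.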

ccoffrin commented 4 years ago

Hi @bernalde, thanks for the update on your re-evaluation effort. Indeed, much has changed over the past couple of years. My two cents: a comprehensive and sustainable solution for supporting MINLPLib in Julia is needed, and it's a non-trivial amount of work to do properly. As Harsha indicates, this repo has not been maintained for JuMP v0.19+, and it really should be.

Given your timeline, you might have a look at the moi and juniper-reduced-moi branches. A couple of years ago we did a quick stop-gap update of this repo to JuMP v0.19+ in order to run some regression tests on Juniper. Although not an ideal solution, it might suit your near-term needs.

If there is interest in making a proper migration of MINLPLib to a JuMP-compatible format, the community would surely welcome it, and we may be able to provide some amount of help in such an effort. I would also point you to ConvexTests.jl; I think it could serve as a nice model for testing general MINLP solvers, at least those accessible from the Julia ecosystem.

bernalde commented 4 years ago

Wow, thanks for the links! The branches you suggested should provide the instances in a format that should work. I think we should settle on JuMP 0.19 for this short-term comparison, to also have the chance of including Alpine (from here I assume it only supports JuMP up to that version). A quick question: to update the instances to JuMP 0.19, did you need to make any changes beyond using the translator Harsha mentioned? If not, we could take care of translating the new instances that have been added to MINLPLib (and submit a PR adding them here).

P.S. This might not be the place, but I have been trying to dig into Julia/JuMP for some time; other projects have always gotten in the way of doing that!

ccoffrin commented 3 years ago

@harshangrjn, given that Alpine seems pretty close to working with JuMP v0.19+, do you think @bernalde could run the latest version of JuMP for his experiments?

harshangrjn commented 3 years ago

@ccoffrin As I type this, I am fixing a few issues with expression parsing, which changed in JuMP v0.21+. Let me complete the testing and merge, after which @bernalde will be able to use Alpine. @bernalde, now that the INFORMS deadline is over, are you still looking at comparing against Alpine for your presentation? I remember you were looking at comparing only convex MINLP solvers, though, right?

bernalde commented 3 years ago

Hello guys! As @harshangrjn knows, I already submitted my INFORMS talk :) We decided not to update the numbers for the Julia solvers: even with @Wikunia's help, some of the instances were failing purely because of file-formatting issues with the GAMS trace files that we analyze using Paver. This would have made the Julia solvers look bad, so rather than use those biased results, and with time being limited, we decided not to update the numbers. That said, it does not mean we are not interested in seeing how those solvers are doing. Our work is on convex MINLP problems, but global solvers also participate (BARON, ANTIGONE, etc.), so we really look forward to including Alpine. I don't know if this counts as a feature request, but if the Julia solvers could generate those trace files by default, it would make our lives easier!

harshangrjn commented 3 years ago

@bernalde In my opinion, the issue with using nonconvex global solvers (BARON, ANTIGONE, Alpine, SCIP, etc.) to solve convex MINLPs (c-MINLPs) lies in how well the solvers have implemented convexity testing for the objective and constraints. As you can imagine, if the solver doesn't recognize a convex constraint as convex, then however good its global optimization methods may be, it may take forever to converge. However, if a solver like Alpine recognizes that the problem is indeed a c-MINLP, then Alpine's run time is more or less the run time of Gurobi, since it hands the processed problem over to Gurobi. Of course, Gurobi cannot yet handle anything beyond convex MIQCQPs (although nonconvex problems are apparently solvable now too), but Alpine would reformulate, say, a quartic c-MINLP into a lifted MIQCQP and solve it with Gurobi; see the sketch below. Anyway, just sharing my thoughts; you will have spent far more time than I have thinking about fair grounds for comparing these solvers. Looking forward to your talk!
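To make the lifting concrete, here is a hand-worked toy (my own sketch, not Alpine's actual reformulation code): a convex quartic objective x^4 becomes quadratic once you introduce w = x^2, at the price of one nonconvex quadratic equality.

```julia
using JuMP, Gurobi

# Toy lifting sketch (not Alpine's code): minimize x^4 - 4x by introducing
# w = x^2, which turns the quartic objective into a QCQP.
m = Model(Gurobi.Optimizer)
set_optimizer_attribute(m, "NonConvex", 2)  # Gurobi 9+: accept the nonconvex equality below

@variable(m, -10 <= x <= 10)
@variable(m, 0 <= w <= 100)   # lifted variable standing in for x^2
@constraint(m, w == x^2)      # the (nonconvex) linking constraint
@objective(m, Min, w^2 - 4x)  # originally x^4 - 4x; now quadratic in (x, w)

optimize!(m)
```

(Alpine itself relaxes such linking constraints with piecewise convex relaxations rather than passing the equality to the MIP solver verbatim, but the lifted structure is the same idea.)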

bernalde commented 3 years ago

You are absolutely right: the key to a global solver solving a convex problem efficiently is identifying that it is convex. On the other hand, even if it knows the problem is convex, it might not handle it as well as solvers that simply assume convexity. To remove the negative bias that can come from a global solver failing to identify convexity, we do as much as we can to inform the solver of it. This makes the competition more about "solver technology" than about convexity detection. In fact, following this study, several global solvers introduced an option to tell the solver the problem is convex; you might want to consider that for Alpine (a hypothetical sketch of such an option is below). If you knew the problem was convex, what would you do differently? Which steps could you safely skip without sacrificing convergence (convexity detection being the obvious one)?
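For instance, something along these lines is what I have in mind (purely hypothetical: "convexity" is an invented option name, not an existing Alpine flag; the nlp/mip sub-solver options follow Alpine's usual pattern):

```julia
using JuMP, Alpine, Ipopt, Gurobi

# Purely hypothetical sketch: "convexity" is an invented user hint, not an
# existing Alpine option. The idea is to let the user assert convexity so the
# solver can skip detection and any machinery only nonconvexity needs.
m = Model(optimizer_with_attributes(
    Alpine.Optimizer,
    "nlp_solver" => optimizer_with_attributes(Ipopt.Optimizer),
    "mip_solver" => optimizer_with_attributes(Gurobi.Optimizer),
    "convexity"  => :convex,  # invented flag: trust the user, skip detection
))
```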