Benchmark for quadratic programming (QP) solvers available in Python.
The objective is to compare and select the best QP solvers for given use cases. The benchmarking methodology is open to discussion. Standard and community test sets are available: all of them can be processed using the `qpbenchmark` command-line tool, resulting in standardized reports that evaluate all metrics across all QP solvers available on the test machine.
The benchmark comes with standard and community test sets to represent different use cases for QP solvers:
New test sets are welcome! The `qpbenchmark` tool is designed to make it easy to wrap up a new test set without re-implementing the benchmark methodology. Check out the contribution guidelines to get started.
Solver | Keyword | Algorithm | Matrices | License |
---|---|---|---|---|
Clarabel | `clarabel` | Interior point | Sparse | Apache-2.0 |
CVXOPT | `cvxopt` | Interior point | Dense | GPL-3.0 |
DAQP | `daqp` | Active set | Dense | MIT |
ECOS | `ecos` | Interior point | Sparse | GPL-3.0 |
Gurobi | `gurobi` | Interior point | Sparse | Commercial |
HiGHS | `highs` | Active set | Sparse | MIT |
HPIPM | `hpipm` | Interior point | Dense | BSD-2-Clause |
MOSEK | `mosek` | Interior point | Sparse | Commercial |
NPPro | `nppro` | Active set | Dense | Commercial |
OSQP | `osqp` | Douglas–Rachford | Sparse | Apache-2.0 |
PIQP | `piqp` | Proximal interior point | Dense & Sparse | BSD-2-Clause |
ProxQP | `proxqp` | Augmented Lagrangian | Dense & Sparse | BSD-2-Clause |
QPALM | `qpalm` | Augmented Lagrangian | Sparse | LGPL-3.0 |
qpOASES | `qpoases` | Active set | Dense | LGPL-2.1 |
qpSWIFT | `qpswift` | Interior point | Sparse | GPL-3.0 |
quadprog | `quadprog` | Goldfarb–Idnani | Dense | GPL-2.0 |
SCS | `scs` | Douglas–Rachford | Sparse | MIT |
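The Keyword column gives the name each solver goes by in the qpsolvers library. As a quick way to check that a solver from this table is installed and working, you can call it directly on a small QP. Here is a minimal sketch, assuming qpsolvers is available in your environment:

```python
import numpy as np
from qpsolvers import solve_qp

# Minimize 1/2 x^T P x + q^T x subject to G x <= h
P = np.array([[4.0, 1.0], [1.0, 2.0]])  # positive-definite cost matrix
q = np.array([1.0, 1.0])
G = -np.eye(2)  # encodes x >= 0
h = np.zeros(2)

# "daqp" is the keyword of one solver from the table above;
# substitute any other keyword whose solver is installed
x = solve_qp(P, q, G, h, solver="daqp")
print(f"Solution: {x}")  # expected: [0. 0.]
```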
We evaluate QP solvers based on the following metrics:
Each metric (computation time, primal and dual residuals, duality gap) produces a different ranking of solvers for each problem. To aggregate those rankings into a single metric over the whole test set, we use the shifted geometric mean (shm), a standard way to aggregate computation times in benchmarks for optimization software. This mean has the advantage of being compromised neither by large outliers (as opposed to the arithmetic mean) nor by small outliers (in contrast to the plain geometric mean). Check out the references below for further details.
Intuitively, a solver with a shifted-geometric-mean runtime of $Y$ is $Y$ times slower than the best solver over the test set. Similarly, a solver with a shifted-geometric-mean primal residual $R$ is $R$ times less accurate on equality and inequality constraints than the best solver over the test set.
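For illustration, here is a minimal sketch of the shifted geometric mean; the shift value of 10 is an assumption for this example, not necessarily the one used in the reports:

```python
import numpy as np

def shifted_geometric_mean(values: np.ndarray, shift: float = 10.0) -> float:
    """Shifted geometric mean of a vector of positive values.

    Large values don't dominate the mean as they would with an
    arithmetic mean, and near-zero values don't drag it down as
    they would with a plain geometric mean.
    """
    # Work in log space to avoid overflow when forming the product
    return float(np.exp(np.mean(np.log(values + shift))) - shift)

# Example: per-problem runtimes (in seconds) of a solver on a test set
runtimes = np.array([0.001, 0.02, 0.5, 30.0])
print(shifted_geometric_mean(runtimes))
```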
The outcome from running a test set is a standardized report comparing solvers against the different metrics. Here are the results for the various `qpbenchmark` test sets:
You can check out results from a variety of machines, and share the reports produced by running the benchmark on your own machine, in the Results category of the discussions forum of each test set.
Here are some known areas of improvement for this benchmark:
Check out the issue tracker for ongoing works and future improvements.
We recommend installing the benchmark and all solvers in an isolated environment using `conda`:

```console
conda env create -f environment.yaml
conda activate qpbenchmark
```
Alternatively, you can install the benchmarking tool on its own with `pip install qpbenchmark`. In that case, the benchmark will run on all supported solvers it can import.
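To see which solver keywords are importable in your environment, and therefore which solvers the benchmark can run on, you can check the list detected by the qpsolvers library (a quick check, assuming qpsolvers is installed):

```python
import qpsolvers

# Solver keywords detected in the current environment
print(qpsolvers.available_solvers)
```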
The benchmark works by running `qpbenchmark` on a Python script describing the test set. For instance:

```console
qpbenchmark my_test_set.py run
```
The test-set script is followed by a benchmark command, such as `run` here. We can add optional arguments to run a specific solver, problem, or solver settings:

```console
qpbenchmark my_test_set.py run --solver proxqp --settings default
```
Check out `qpbenchmark --help` for a list of available commands and arguments.
The command line ships a `plot` command to compare solver performances over a test set for a specific metric. For instance, run:

```console
qpbenchmark maros_meszaros_dense.py plot runtime high_accuracy
```

to generate the following plot:
Contributions to improving this benchmark are welcome. You can for instance propose new problems, or share the runtimes you obtain on your machine. Check out the contribution guidelines for details.
If you use `qpbenchmark` in your work, please cite all its contributors as follows:
```bibtex
@software{qpbenchmark2024,
  title = {{qpbenchmark: Benchmark for quadratic programming solvers available in Python}},
  author = {Caron, Stéphane and Zaki, Akram and Otta, Pavel and Arnström, Daniel and Carpentier, Justin and Yang, Fengyu and Leziart, Pierre-Alexandre},
  url = {https://github.com/qpsolvers/qpbenchmark},
  license = {Apache-2.0},
  version = {2.3.0},
  year = {2024}
}
```