ProxSuite: The Advanced Proximal Optimization Toolbox

License: BSD 2-Clause "Simplified" License

ProxSuite is a collection of open-source, numerically robust, precise, and efficient numerical solvers (LPs, QPs, etc.) rooted in revisited primal-dual proximal algorithms. Through ProxSuite, we aim to offer the community scalable optimizers that deal with dense, sparse, or matrix-free problems. While the first targeted application is Robotics, ProxSuite can be used in other contexts without limitation.

ProxSuite is actively developed and supported by the Willow and Sierra research groups, joint research teams between Inria, École Normale Supérieure de Paris, and Centre National de la Recherche Scientifique, located in France.

ProxSuite is already integrated into:

We are ready to integrate ProxSuite within other optimization ecosystems.

ProxSuite main features

Proxsuite is fast:

Proxsuite is versatile, offering, through a unified API, advanced algorithms specialized for efficiently exploiting problem structures:

with dedicated features for

Proxsuite is flexible:

Proxsuite is extensible.

Proxsuite is reliable and extensively tested, showing the best performance on the hardest problems in the literature.

Proxsuite is supported and tested on Windows, Mac OS X, Unix, and Linux.

Documentation

The online ProxSuite documentation of the latest release is available here.

Getting started

ProxSuite is distributed through many well-known package managers.

Quick install with pip:

   pip install proxsuite

This approach is available on Linux, Windows and Mac OS X.
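
Once installed, a quick smoke test (not from the ProxSuite documentation, just a generic check) confirms that the Python bindings import correctly:

# Generic smoke test: verify the Python bindings import and report the
# installed package version via importlib.metadata.
import importlib.metadata

import proxsuite  # should import without error after `pip install proxsuite`

print("proxsuite version:", importlib.metadata.version("proxsuite"))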

Quick install with conda:

   conda install proxsuite -c conda-forge

This approach is available on Linux, Windows and Mac OS X.

Quick install with brew:

   brew install proxsuite

This approach is available on Linux and Mac OS X.

Alternative approaches

Installation from source is presented here.

Compiling a first example program

For the fastest performance, use the following command to enable vectorization when compiling the simple example.

g++ -O3 -march=native -DNDEBUG -std=gnu++17 -DPROXSUITE_VECTORIZE examples/first_example_dense.cpp -o first_example_dense $(pkg-config --cflags proxsuite)

Using ProxSuite with CMake

If you want to use ProxSuite with CMake, the following tiny example should help you:

cmake_minimum_required(VERSION 3.10)

project(Example CXX)
find_package(proxsuite REQUIRED)
set(CMAKE_CXX_STANDARD 17) # set(CMAKE_CXX_STANDARD 14) will work too

add_executable(example example.cpp)
target_link_libraries(example PUBLIC proxsuite::proxsuite)

# Vectorization support via SIMDE, activated by the compilation options '-march=native' or '-mavx2 -mavx512f'
add_executable(example_with_full_vectorization_support example.cpp)
target_link_libraries(example_with_full_vectorization_support PUBLIC proxsuite::proxsuite-vectorized)
target_compile_options(example_with_full_vectorization_support PUBLIC "-march=native")

If you have compiled ProxSuite with vectorization support, you can also use the CMake target proxsuite::proxsuite-vectorized to link against SIMDE. Don't forget to use -march=native to get the best performance.

ProxQP

The ProxQP algorithm is a numerical optimization approach for solving quadratic programming problems of the form:

$$ \begin{align} \min_{x} & ~\frac{1}{2}x^{T}Hx+g^{T}x \\ \text{s.t.} & ~A x = b \\ & ~l \leq C x \leq u \end{align} $$

where $x \in \mathbb{R}^n$ is the optimization variable. The objective function is defined by a positive semidefinite matrix $H \in \mathcal{S}^n_+$ and a vector $g \in \mathbb{R}^n$. The linear constraints are defined by the equality-constraint matrix $A \in \mathbb{R}^{n_\text{eq} \times n}$, the inequality-constraint matrix $C \in \mathbb{R}^{n_\text{in} \times n}$, and the vectors $b \in \mathbb{R}^{n_\text{eq}}$, $l \in \mathbb{R}^{n_\text{in}}$ and $u \in \mathbb{R}^{n_\text{in}}$, so that $b_i \in \mathbb{R},~ \forall i = 1,...,n_\text{eq}$, and $l_i \in \mathbb{R} \cup \{ -\infty \}$ and $u_i \in \mathbb{R} \cup \{ +\infty \},~ \forall i = 1,...,n_\text{in}$.
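
As a concrete illustration, here is a minimal sketch of setting up and solving a small dense QP of this form through the Python bindings; the problem data below is made up for the example, and the full API (settings, backends, warm starts) is described in the documentation.

# Minimal sketch: solve a small dense QP of the form above with the Python
# bindings. The problem data is illustrative only.
import numpy as np
import proxsuite

n, n_eq, n_in = 3, 1, 2              # primal dimension, equalities, inequalities

H = np.eye(n)                        # positive semidefinite Hessian
g = np.array([-1.0, 0.5, 0.0])
A = np.ones((n_eq, n))               # equality constraint: x_0 + x_1 + x_2 = 1
b = np.array([1.0])
C = np.eye(n)[:n_in, :]              # two inequality rows with box bounds
l = np.array([-1.0, -1.0])
u = np.array([1.0, 1.0])

qp = proxsuite.proxqp.dense.QP(n, n_eq, n_in)
qp.init(H, g, A, b, C, l, u)
qp.solve()

print("primal solution:", qp.results.x)
print("dual solutions (eq, ineq):", qp.results.y, qp.results.z)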

Citing ProxQP

If you are using ProxQP for your work, we encourage you to cite the related paper.

Numerical benchmarks

The numerical benchmarks of ProxQP against other commercial and open-source solvers are available here.

For dense Convex Quadratic Programs with inequality and equality constraints, when asking for relatively high accuracy (e.g., 1e-6), one obtains the following results.

(Figure: timing benchmark on randomly generated mixed QPs, dense backend, eps_abs = 1e-6.)

On the y-axis, you can see timings in seconds, and on the x-axis the dimension of the primal variable of the randomly generated quadratic programs (the number of constraints of each generated problem is half its primal dimension). For every dimension, problems are generated with different seeds, and timings are averaged over successive runs of the same problems. For every benchmarked solver and every generated quadratic program, the chart shows bar plots of timings, with the median marked as a dot and the minimal and maximal values defining the extent of the bar. You can see that ProxQP is always below the other solvers, which means it is the quickest on this test.

For hard problems from the Maros-Mészáros test set, when asking for high accuracy (e.g., 1e-9), one obtains the results below.

(Figure: performance profiles on the Maros-Mészáros problems at high accuracy.)

The chart above reports the performance profiles of the different solvers; this is a classic way of benchmarking solvers. A performance profile gives the fraction of problems solved (on the y-axis) as a function of runtime (on the x-axis, measured as a multiple of the runtime of the fastest solver for that problem), so the higher, the better. You can see that ProxQP is the quickest on over 60% of the problems (i.e., for $\tau=1$) and that, to solve about 90% of the problems, it is at most 2 times slower than the fastest solver on those problems (i.e., for $\tau\approx2$).

Note: All these results have been obtained with a 11th Gen Intel(R) Core(TM) i7-11850H @ 2.50GHz CPU.
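
To make the performance-profile construction concrete, here is a small illustrative computation from a matrix of solver timings; this is a toy sketch with made-up numbers, not the benchmark code itself.

# Illustrative sketch of a performance profile: for each solver, the fraction
# of problems solved within tau times the runtime of the fastest solver on
# that problem. Numbers are made up.
import numpy as np

# rows = problems, columns = solvers; np.inf marks a failure to solve
timings = np.array([
    [0.010, 0.015, 0.040],
    [0.200, 0.180, np.inf],
    [0.050, 0.120, 0.060],
])

best = timings.min(axis=1, keepdims=True)   # fastest solver per problem
ratios = timings / best                     # runtime ratios (>= 1)

def profile(tau):
    """Fraction of problems each solver finishes within tau * best runtime."""
    return (ratios <= tau).mean(axis=0)

for tau in (1.0, 2.0, 4.0):
    print(f"tau = {tau}:", profile(tau))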

QPLayer

QPLayer makes it possible to use a QP as a layer within standard learning architectures. More precisely, QPLayer differentiates, with respect to $\theta$, the primal and dual solutions of QPs of the form

$$ \begin{align} \min_{x} & ~\frac{1}{2}x^{T}H(\theta)x+g(\theta)^{T}x \\ \text{s.t.} & ~A(\theta) x = b(\theta) \\ & ~l(\theta) \leq C(\theta) x \leq u(\theta) \end{align} $$

where $x \in \mathbb{R}^n$ is the optimization variable. The objective function is defined by a positive semidefinite matrix $H(\theta) \in \mathcal{S}^n_+$ and a vector $g(\theta) \in \mathbb{R}^n$. The linear constraints are defined by the equality-constraint matrix $A(\theta) \in \mathbb{R}^{n_\text{eq} \times n}$, the inequality-constraint matrix $C(\theta) \in \mathbb{R}^{n_\text{in} \times n}$, and the vectors $b(\theta) \in \mathbb{R}^{n_\text{eq}}$, $l(\theta) \in \mathbb{R}^{n_\text{in}}$ and $u(\theta) \in \mathbb{R}^{n_\text{in}}$, so that $b_i \in \mathbb{R},~ \forall i = 1,...,n_\text{eq}$, and $l_i \in \mathbb{R} \cup \{ -\infty \}$ and $u_i \in \mathbb{R} \cup \{ +\infty \},~ \forall i = 1,...,n_\text{in}$.

QPLayer is able to learn more structured architectures. For example, $\theta$ can consist of only some elements of $A$ while $b$ is kept fixed (see, e.g., the example showing how to include QPLayer in a learning pipeline). QPLayer can also differentiate through LPs. QPLayer allows for parallelized computations over CPUs and is interfaced with PyTorch.
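
As a sketch of what this looks like in practice, the snippet below assumes the QPFunction layer exported by proxsuite.torch.qplayer and an argument order matching the problem data above ($H$, $g$, $A$, $b$, $C$, $l$, $u$); the exact signature and return values should be checked against the QPLayer examples shipped with ProxSuite.

# Hedged sketch: using a QP as a differentiable layer with PyTorch.
# Assumes QPFunction from proxsuite.torch.qplayer takes the QP data in the
# order (H, g, A, b, C, l, u) and returns primal and dual solutions; check
# the shipped QPLayer examples for the exact signature.
import torch
from proxsuite.torch.qplayer import QPFunction

n, n_eq, n_in = 3, 1, 2

H = torch.eye(n, dtype=torch.float64)
g = torch.zeros(n, dtype=torch.float64, requires_grad=True)  # learnable parameter
A = torch.ones(n_eq, n, dtype=torch.float64)
b = torch.ones(n_eq, dtype=torch.float64)
C = torch.eye(n, dtype=torch.float64)[:n_in, :]
l = -torch.ones(n_in, dtype=torch.float64)
u = torch.ones(n_in, dtype=torch.float64)

qp_layer = QPFunction()                  # the QP acts as a layer in the graph
x, y, z = qp_layer(H, g, A, b, C, l, u)  # primal and dual solutions

loss = x.pow(2).sum()                    # any downstream loss on the QP solution
loss.backward()                          # gradients flow back to the learnable data
print(g.grad)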

Citing QPLayer

If you are using QPLayer for your work, we encourage you to cite the related paper.

Installation procedure

Please follow the installation procedure here.