Simple-Robotics / proxsuite

The Advanced Proximal Optimization Toolbox
BSD 2-Clause "Simplified" License

Type error about QPFunction with structurally infeasible #307

Closed zbzhu closed 8 months ago

zbzhu commented 9 months ago

While testing the examples in (https://github.com/Simple-Robotics/proxsuite/blob/main/examples/python/qplayer_sudoku.py), I found a type error when structural_feasibility=False is passed to QPFunction. [traceback screenshot]

The code is below, with pytorch=1.11.0 and numpy=1.24.3.

import torch
import torch.nn as nn
from proxsuite.torch.qplayer import QPFunction

nx = 64
Q = 0 * torch.eye(nx).double()
p = torch.ones(nx).double()
G = -torch.eye(nx).double()
h = torch.zeros(nx).double()
l = -1.0e20 * torch.ones(nx).double()
A = torch.rand([2, nx]).double()
b = torch.ones(2).double()
x, y, z, _, _ = QPFunction(structural_feasibility=False, omp_parallel=False)(
    Q, p, A, b, G, l, h
)

Is there something obviously wrong with my code? Thank you!

fabinsch commented 9 months ago

Hi @zbzhu ,

I just created a clean conda environment with Python 3.9, installed the necessary packages from conda, and I can run your code without problems.

conda create --name test_qp_layer python=3.9
conda activate test_qp_layer
conda install proxsuite -c conda-forge
conda install pytorch torchvision torchaudio cpuonly -c pytorch
conda install ipython

then I can run

In [1]: import torch
   ...: import torch.nn as nn
   ...: from proxsuite.torch.qplayer import QPFunction
   ...: 
   ...: nx = 64
   ...: Q = 0 * torch.eye(nx).double()
   ...: p = torch.ones(nx).double()
   ...: G = -torch.eye(nx).double()
   ...: h = torch.zeros(nx).double()
   ...: l = -1.0e20 * torch.ones(nx).double()
   ...: A = torch.rand([2,nx]).double()
   ...: b = torch.ones(2).double()
   ...: x, y, z, _, _ = QPFunction(structural_feasibility=False, omp_parallel=False)(
   ...: Q, p, A, b, G, l, h
   ...: )

In [2]: 

I have numpy = 1.26.4 and torch = 2.2.1+cpu.

fabinsch commented 9 months ago

Hi again @zbzhu , in #308 we replace torch.Tensor, which is an alias for the default tensor type (torch.FloatTensor). It could be the source of the problem you are seeing, but since I am not able to reproduce your problem, I cannot check.

You could integrate the changes from #308 and see if it helps.
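The dtype mismatch hinted at above can be reproduced in isolation. A minimal sketch (illustrative only, not the actual #308 patch): torch.Tensor always allocates the default tensor type (float32), which clashes with the double-precision QP data, whereas allocating with an explicit dtype taken from the inputs avoids the mismatch.

```python
import torch

# torch.Tensor is an alias for the default tensor type (torch.FloatTensor),
# so it allocates float32 regardless of the problem data's precision.
buf_old = torch.Tensor(4)

# #308-style alternative (sketch): allocate with the dtype of the QP data.
Q = torch.eye(4).double()                # float64 problem data
buf_new = torch.empty(4, dtype=Q.dtype)  # float64, matches the QP data

print(buf_old.dtype)  # torch.float32
print(buf_new.dtype)  # torch.float64
```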

zbzhu commented 8 months ago

Thanks for your reply! After updating to torch = 2.2.1, the code runs.

However, I am confused about the implementation in QPFunctionFn_infeas. Why is the cat function applied to the lower and upper bounds of the inequality constraints along the first dimension? [code screenshot]

There are two situations: 1) If the inputs are tensors in batch format (i.e., [batch, n, n]), they are concatenated along the batch dimension, and the upper-bound constraints are ignored in the subsequent code. 2) If the inputs are tensors of size [n, n], all constraints are considered, and the code sets a new lower bound of -1e20.
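The two situations can be illustrated with torch.cat directly (the names below are illustrative, not the library's internals): with unbatched bounds, concatenating along dim 0 stacks lower and upper bounds into one longer constraint vector, but with batched bounds the same call stacks along the batch axis instead.

```python
import torch

n_in, batch = 3, 5

# Unbatched bounds of shape (n_in,): cat along dim 0 stacks the two bound
# vectors into a single vector of 2*n_in single-sided constraints.
l = -torch.ones(n_in)
u = torch.ones(n_in)
h_vec = torch.cat([-l, u], dim=0)
print(h_vec.shape)  # torch.Size([6])

# Batched bounds of shape (batch, n_in): cat along dim 0 now stacks along
# the batch axis, so the upper bounds are treated as extra batch entries
# rather than extra constraints; dim 1 is the constraint axis here.
l_b = l.expand(batch, n_in)
u_b = u.expand(batch, n_in)
h_dim0 = torch.cat([-l_b, u_b], dim=0)
h_dim1 = torch.cat([-l_b, u_b], dim=1)
print(h_dim0.shape)  # torch.Size([10, 3])
print(h_dim1.shape)  # torch.Size([5, 6])
```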

Am I misunderstanding something?

fabinsch commented 8 months ago

Hi @zbzhu, thanks for pointing this out. I checked, and you are right: this does not behave as expected when the upper and lower bounds of the inequality constraints already have the batch size in dimension 0.

In our examples, we considered the bounds to be the same for all our QPs, so we just defined them as a simple vector of size n_ineq; that's why we missed this point.

I will provide a fix. For now, if this is your case and the bounds do not change across the batch, just pass a vector of shape (n_in,).

fabinsch commented 8 months ago

It's fixed here.

We first make sure that all matrices and vectors are expanded to the proper batch size in dimension 0, and then we concatenate h = [-l, u] and G = [-G, G] along axis 1, since the derivation for infeasible QPs in our work was done for single-sided constraints.
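A rough sketch of the scheme described above (illustrative only, not the actual proxsuite implementation): expand any unbatched inputs to the batch size in dimension 0, then rewrite the two-sided constraint l <= G x <= u in single-sided form [-G; G] x <= [-l; u] by concatenating along axis 1, the constraint axis.

```python
import torch

def to_single_sided(G, l, u, batch):
    """Hypothetical helper: batch-expand, then fold two-sided inequality
    bounds into single-sided form. Not the proxsuite code itself."""
    # 1) Expand inputs that lack a batch dimension to (batch, ...).
    if G.dim() == 2:
        G = G.unsqueeze(0).expand(batch, *G.shape)
    if l.dim() == 1:
        l = l.unsqueeze(0).expand(batch, *l.shape)
    if u.dim() == 1:
        u = u.unsqueeze(0).expand(batch, *u.shape)
    # 2) Concatenate along axis 1 (constraints), not axis 0 (batch):
    #    l <= G x <= u  becomes  [-G; G] x <= [-l; u].
    G2 = torch.cat([-G, G], dim=1)
    h2 = torch.cat([-l, u], dim=1)
    return G2, h2

# Unbatched data as in the example at the top of the thread.
G = -torch.eye(4).double()
l = -1e20 * torch.ones(4).double()
u = torch.zeros(4).double()
G2, h2 = to_single_sided(G, l, u, batch=8)
print(G2.shape, h2.shape)  # torch.Size([8, 8, 4]) torch.Size([8, 8])
```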

ping @Bambade.