Closed Kenneth-T-Moore closed 2 years ago
Thanks for opening the issue. @kanekosh will look into it.
I believe in pyOptSparse, the b part of the constraint is supplied as part of the bound information. For example, if you have a constraint c = 2*x + 100 and you want to constrain c between -10 and 10, you have to call addCon with lower=-110 and upper=-90. Then it should work. In other words, the constraint LB < Ax + b < UB has to be given as LB - b < Ax < UB - b.
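A minimal sketch of that bound shift in plain Python, using the c = 2*x + 100 example above (the helper name shift_bounds is illustrative only, not part of the pyOptSparse API):

```python
def shift_bounds(lower, upper, b):
    """Shift constraint bounds so that LB <= A*x + b <= UB becomes
    (LB - b) <= A*x <= (UB - b), which is the form pyOptSparse expects
    for linear constraints (only the Jacobian A defines the constraint)."""
    return lower - b, upper - b

# Example from the discussion: c = 2*x + 100, constrained to [-10, 10].
lb, ub = shift_bounds(-10, 10, 100)
print(lb, ub)  # -110 -90
```

The shifted values -110 and -90 are exactly the lower/upper arguments suggested above.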
I think the documentation can definitely be improved; there's very little information on linear constraints.
I'm considering adding the y-intercept b as an optional input to addConGroup. If b is provided, we impose LB < Ax + b < UB, and otherwise LB < Ax < UB by default.
@Kenneth-T-Moore Are there any specific reasons you suggested requiring an initial function value of the constraint instead of b? They are equivalent, but I think taking b as input is more straightforward than taking the function value and back-calculating b internally. In other words, would it be easy to compute b on the OpenMDAO side?
@kanekosh: I think the main reason is that, in order to get b, you would need a run of your OpenMDAO model with design variables at 0, but that might lead to dividing by zero in some other part of the model. At the time you submit the jac for the linear constraint, you've already run the OpenMDAO model at the design point, so computing the constraint value there is free.
@Kenneth-T-Moore I see that we don't want to run the model with all-zero input. But once you've run the model at a (non-zero) design point x and computed the value of the linear constraint g, in theory you should be able to back-compute b = g - Ax either on the OpenMDAO side or in pyOptSparse.
I think the point is on which side (OM or pyOptSparse) we should actually compute b. We'd prefer OM to compute b and pyOptSparse to take b as input in addConGroup. However, I'm not sure if computing b = g - Ax in OM is possible/easy implementation-wise.
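A sketch of that back-computation with NumPy. The model evaluation is faked with a hard-coded linear function g(x) = 2*x + 100 from the example in this thread; the variable names are illustrative, not actual OpenMDAO or pyOptSparse identifiers:

```python
import numpy as np

# The optimizer only receives the Jacobian A of the linear constraint;
# the constant term b is unknown to it.
A = np.array([[2.0]])     # Jacobian of the linear constraint
x0 = np.array([3.0])      # a (non-zero) initial design point

def model_constraint(x):
    # Stand-in for the model evaluation: g(x) = 2*x + 100
    return 2.0 * x + 100.0

g0 = model_constraint(x0)  # constraint value at the design point
b = g0 - A @ x0            # back-computed y-intercept
print(b)                   # [100.]
```

Since the constraint is linear, any design point works; the all-zero evaluation (and its divide-by-zero risk) is never needed.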
@kanekosh Oh yeah, I don't know what I was thinking. If you can compute b, we can compute it too. Though, that also means that, technically, we could use the computed b to shift the bounds, and then no change to pyoptsparse would be required.
@Kenneth-T-Moore Yes, the change to pyOptSparse is technically not necessary to fix this issue in that way. But we'll still add b as an optional input because I believe that'd be more intuitive and slightly more helpful to users.
Edit: this time, we will just update the documentation and will not add b.
I personally think that if you are computing b, it should be very straightforward to do the subtraction. I'd prefer not to change the API because b is only used for linear constraints, so adding it to the general addConstraint function would make it a bit less clean. I am open to creating a separate function, called addLinearConstraints or something, that would accept A and b, but I would prefer not to modify the existing call signature.
Description
If I provide a constraint defined by y = m*x + b, and set linear=True for the ConGroup, it is treated as if it were y = m*x.
Steps to reproduce issue
I've put together a simple test here
The problem has 2 constraints. One of them has a very high constant term: y = 2x + 100. This term ensures there is no feasible solution to the problem. When this constraint is defined with linear=False, SNOPT finds the problem infeasible, which is expected. When linear=True, SNOPT finds a solution (exit 0, info 1), even though the problem is unsolvable. This is because it ignores the constant term in the equation. You can verify by changing the value of LINEAR near the top of the test file.
Current behavior
I think what is happening is that pyoptsparse's knowledge of this constraint comes entirely from the jacobian we give it in addConGroup. We never give it an initial value, nor does it ever ask for the current value when it calls the objfun. From the jacobian, it can only evaluate the equation as y = m*x, which is incorrect.
Expected behavior
This is obviously a simplified, nonsensical problem, but I expect that it should fail to find a solution if the linear constraint was working as expected. This can be accomplished as follows:
When LINEAR=True, require an initial value for the constraint. That way, pyoptsparse can back-calculate the correct y-intercept for the constraint. The initial value should correspond to the initial design variable values that are given to pyoptsparse.
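A standalone check of the infeasibility argument (no pyOptSparse needed). The constraint y = 2x + 100 and the bounds come from the reproduction above; the sampled x range [0, 5] is an illustrative assumption:

```python
# Constraint: -10 <= y <= 10 with y = 2*x + 100.
# With the constant term, y >= 100 for any x >= 0, so the upper
# bound y <= 10 can never be satisfied; dropping the constant
# makes the constraint look trivially satisfiable near x = 0.
def feasible(x, include_constant):
    y = 2.0 * x + (100.0 if include_constant else 0.0)
    return -10.0 <= y <= 10.0

xs = [i * 0.1 for i in range(51)]           # sample x in [0, 5]
print(any(feasible(x, True) for x in xs))   # False: truly infeasible
print(any(feasible(x, False) for x in xs))  # True: constant dropped
```

This mirrors what SNOPT sees: with the constant term kept (linear=False) the problem is correctly reported infeasible, while dropping it (linear=True, jacobian-only) produces a spurious "solution".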
Code versions