ashander opened this issue 5 years ago
Yes, the algorithms get confused here. In general, most algorithms only guarantee convergence to a local optimum if they are given a feasible starting point.
It would be nicer to return an error code here. The trick is reliably detecting this case, since with an active constraint some algorithms approach the feasible set from the outside. An extreme case is a nonlinear equality constraint h(x) = 0, where the converged value will usually be at best within O(xtol) of the feasible set. So returning an error code simply because the result is slightly infeasible wouldn't be good.
If the user specifies a positive tolerance for the constraint, I suppose we could return an error if the optimum violates the constraint by more than this tolerance. But since the default tolerance is zero, I don't know that we should return an error in that case when the returned value is only slightly infeasible.
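To make the equality-constraint point concrete, here is a minimal sketch with nloptr (the objective, constraint, and starting point are assumptions for illustration); the residual |h(x)| at the returned point is typically small but nonzero:

```r
library(nloptr)

# Minimize x1^2 + x2^2 subject to the equality constraint h(x) = x1 + x2 - 1 = 0.
eval_f        <- function(x) sum(x^2)
eval_grad_f   <- function(x) 2 * x
eval_g_eq     <- function(x) x[1] + x[2] - 1
eval_jac_g_eq <- function(x) matrix(c(1, 1), nrow = 1)

res <- nloptr(x0 = c(2, 0), eval_f = eval_f, eval_grad_f = eval_grad_f,
              eval_g_eq = eval_g_eq, eval_jac_g_eq = eval_jac_g_eq,
              opts = list(algorithm = "NLOPT_LD_SLSQP", xtol_rel = 1e-8))

# The converged point is usually only within roughly O(xtol) of h(x) = 0,
# so a small nonzero residual here is expected behavior, not an error.
abs(eval_g_eq(res$solution))
```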
Well, a workaround is to check manually whether the solution respects the constraints, i.e., to verify them yourself after the optimizer returns its solution.
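For example, a minimal sketch of such a post-hoc check with nloptr (the problem pieces and the tolerance value are placeholders):

```r
library(nloptr)

# Placeholder problem: any objective plus constraints in nloptr's
# g(x) <= 0 convention will do; these two are jointly infeasible.
eval_f      <- function(x) sum(x^2)
eval_g_ineq <- function(x) c(x[2] - x[1], x[1] - x[2] + 1)

res <- nloptr(x0 = c(0, 0), eval_f = eval_f, eval_g_ineq = eval_g_ineq,
              opts = list(algorithm = "NLOPT_LN_COBYLA", xtol_rel = 1e-8))

# Manual post-check: any component of g above a small tolerance (the value
# here is an arbitrary choice) means the returned point is infeasible.
tol <- 1e-6
if (any(eval_g_ineq(res$solution) > tol))
  warning("optimizer reported convergence at an infeasible point")
```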
It would be very nice to get well-defined output.
This is a 2-d toy problem whose constraints (x - y > 0 and x - y < -1) have no feasible region; it was also reported as a bug against scipy here: https://github.com/scipy/scipy/issues/7618
Using the R interface, the problem is:
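A minimal nloptr sketch of this setup (the quadratic objective, starting point, and option values are assumptions for illustration):

```r
library(nloptr)

# Any smooth objective exhibits the issue, since the feasible set is empty.
eval_f      <- function(x) (x[1] - 1)^2 + (x[2] + 1)^2
eval_grad_f <- function(x) c(2 * (x[1] - 1), 2 * (x[2] + 1))

# nloptr expects inequalities as g(x) <= 0:
#   x - y > 0    ->  g1 = y - x     <= 0
#   x - y < -1   ->  g2 = x - y + 1 <= 0
# No point satisfies both, so the feasible region is empty.
eval_g_ineq     <- function(x) c(x[2] - x[1], x[1] - x[2] + 1)
eval_jac_g_ineq <- function(x) rbind(c(-1, 1), c(1, -1))

res_slsqp <- nloptr(x0 = c(0, 0), eval_f = eval_f, eval_grad_f = eval_grad_f,
                    eval_g_ineq = eval_g_ineq, eval_jac_g_ineq = eval_jac_g_ineq,
                    opts = list(algorithm = "NLOPT_LD_SLSQP", xtol_rel = 1e-8))

res_cobyla <- nloptr(x0 = c(0, 0), eval_f = eval_f, eval_g_ineq = eval_g_ineq,
                     opts = list(algorithm = "NLOPT_LN_COBYLA", xtol_rel = 1e-8))

res_slsqp$status   # both runs report a success status code
res_cobyla$status  # despite the infeasible returned solution
```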
Both SLSQP and COBYLA return apparent convergence.
SLSQP
COBYLA