LLNL / hiop

HPC solver for nonlinear optimization problems

Add more options to control NLP scaling #649

Closed · nychiang closed this 1 year ago

nychiang commented 1 year ago

In this PR, the following user parameters are introduced:

scaling_max_obj_grad: If a positive value is given, the objective of the user's NLP is scaled so that the inf-norm of its gradient equals the given value. For the objective, this value overrides the one given by scaling_max_grad. Default value is 0, i.e., disabled.

scaling_max_con_grad: If a positive value is given, each constraint of the user's NLP is scaled so that the inf-norm of its gradient equals the given value. For the constraints, this value overrides the one given by scaling_max_grad. Default value is 0, i.e., disabled.

scaling_min_grad: If a positive value is given, it is used as a lower bound for the scaling factors. This option takes priority: the final scaling factor must be greater than or equal to this value, even though the result may then violate the targets given in scaling_max_grad, scaling_max_obj_grad, and scaling_max_con_grad. Default value is 1e-8. A sketch of how the three options interact is given below.
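For illustration, a minimal sketch of the clamping semantics described above. The function name and structure are hypothetical (not taken from HiOp's sources), and the "scale down only" behavior is an assumption borrowed from Ipopt-style gradient-based scaling:

```cpp
// Hypothetical sketch of the option semantics; not HiOp source code.
#include <algorithm>

double compute_scale_factor(double grad_inf_norm,     // inf-norm of the gradient at the initial point
                            double scaling_max_grad,  // generic target (pre-existing option)
                            double specific_max_grad, // scaling_max_obj_grad or scaling_max_con_grad; 0 = unset
                            double scaling_min_grad)  // lower bound on the factor (new option)
{
  // The objective-/constraint-specific target overrides the generic one when positive.
  const double target = specific_max_grad > 0. ? specific_max_grad : scaling_max_grad;
  // Assumption: scaling only shrinks (factor <= 1), so well-scaled problems are left alone.
  const double scale = std::min(1., target / std::max(grad_inf_norm, 1e-16));
  // scaling_min_grad takes priority: the factor never drops below it, even
  // if the gradient norm then stays above the requested target.
  return std::max(scale, scaling_min_grad);
}
```

Read this way, a huge gradient norm combined with a tiny target can no longer drive a scaling factor toward zero, which is what the lower bound guards against.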

Closes #648

cnpetra commented 1 year ago

We should test this manually to ensure that it does not mess up the duals (and, of course, that it brings the derivative norms to the target).
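For context on why scaling touches the duals (standard KKT reasoning, not specific to this PR): if the objective is scaled by $s_f$ and constraint $i$ by $s_{c,i}$, stationarity of the scaled problem gives

$$ s_f \nabla f(x) + \sum_i \tilde\lambda_i\, s_{c,i} \nabla c_i(x) = 0 \quad\Rightarrow\quad \tilde\lambda_i = \frac{s_f}{s_{c,i}}\,\lambda_i, $$

so the multipliers of the scaled problem must be mapped back by the factor $s_{c,i}/s_f$ to recover the duals of the original NLP.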

nychiang commented 1 year ago

I did some tests with NlpSparseEx1, where one can specify a scaling factor on the command line. I played with that scaling factor and the three options introduced in this PR. All the problems converge, albeit with different numbers of iterations. In the worst case I got some warnings about a large residual from the compressed linear system.
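For anyone reproducing this by hand, a hedged sketch of what a driver fragment could look like: the three scaling_* option names come from this PR, while the class and method names mirror HiOp's existing sparse example drivers and should be checked against the current headers.

```cpp
// Hypothetical driver fragment; verify class/method names against HiOp's API.
#include "hiopInterface.hpp"
#include "hiopNlpFormulation.hpp"
#include "hiopAlgFilterIPM.hpp"

using namespace hiop;

hiopSolveStatus solve_with_scaling(hiopInterfaceSparse& interface)
{
  hiopNlpSparse nlp(interface);

  // Options added in this PR (names taken from the PR description).
  nlp.options->SetNumericValue("scaling_max_obj_grad", 100.); // target inf-norm of the objective gradient
  nlp.options->SetNumericValue("scaling_max_con_grad", 100.); // target inf-norm of the constraint gradients
  nlp.options->SetNumericValue("scaling_min_grad", 1e-8);     // floor on the scaling factors

  hiopAlgFilterIPMNewton solver(&nlp);
  return solver.run();
}
```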

nychiang commented 1 year ago

@cnpetra the new commit updates the option descriptions to match the .tex file.