We should test this manually to ensure that it does not mess up the duals (and, of course, that it brings the derivative norms to the target).
I did some tests with NlpSparseEx1, where one can specify a scaling factor on the command line.
I played with the scaling factor and the three options introduced in this PR. All the problems converge, albeit with different numbers of iterations. The worst case is that I got some warnings about a large residual from the compressed linear system.
@cnpetra the new commit has updated the option descriptions according to the .tex file.
In this PR, the following user parameters are introduced:
- `scaling_max_obj_grad`: If a positive value is given, the objective of the user's NLP will be scaled so that the inf-norm of its gradient equals the given value. This value overrides the value given by `scaling_max_grad`. Default value is 0.
- `scaling_max_con_grad`: If a positive value is given, the constraints of the user's NLP will be scaled so that the inf-norms of their gradients equal the given value. This value overrides the value given by `scaling_max_grad`. Default value is 0.
- `scaling_min_grad`: If a positive value is given, it is used as the lower bound for the scaling factors. This option takes priority, i.e., the final scaling factor computed must be greater than or equal to this value, even though this may violate the values given in `scaling_max_grad`, `scaling_max_obj_grad` and `scaling_max_con_grad`. Default value is 1e-8.

Closes #648