Closed mewilhel closed 4 years ago
From v0.4 onward, each nonlinear function has its own sparsity pattern and constructs subgradients using it. Within-function sparsity remains a potential area for improvement, as does implementing reverse-mode subgradient propagation.
Currently, when EAGO is provided a problem containing `m` nonlinear variables, it uses subgradients of size `m` in all nonlinear expressions. We already get sparsity information from the nonlinear structures when translating the `JuMP.NLPEvaluator` to the `EAGO.Evaluator`, so it would make sense to use `MC{2,NS}` objects when computing a two-variable nonlinear term and then use the `grad_sparsity` values to unpack this into the appropriate components of the full subgradient.
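The unpacking step described above can be sketched as follows. This is an illustrative example, not EAGO's internal API: the variable names and the scatter loop are hypothetical, and plain vectors stand in for the subgradient storage of an `MC{2,NS}`-style object.

```julia
# Sketch: scatter the subgradient of a two-variable nonlinear term,
# computed in a reduced 2-dimensional object, into the full m-dimensional
# subgradient using the term's sparsity pattern.

# Suppose the full problem has m = 5 nonlinear variables and this term
# depends only on variables 2 and 4 (its grad_sparsity).
m = 5
grad_sparsity = [2, 4]

# Subgradient computed with a reduced two-variable object: one component
# per participating variable (values here are arbitrary placeholders).
reduced_subgrad = [1.5, -0.25]

# Unpack into the full-length subgradient; components of variables that do
# not participate in the term are structurally zero.
full_subgrad = zeros(m)
for (k, i) in enumerate(grad_sparsity)
    full_subgrad[i] = reduced_subgrad[k]
end

full_subgrad  # [0.0, 1.5, 0.0, -0.25, 0.0]
```

The benefit is that the per-term computation touches only two subgradient components instead of `m`, with the scatter into the full vector done once at the end.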