plasmo-dev / Plasmo.jl

A Platform for Scalable Modeling and Optimization

Complete Plasmo.jl rewrite and updates for JuMP nonlinear interface #105

Closed · jalving closed this 4 weeks ago

jalving commented 1 month ago

This is a mega PR that rewrites the core of Plasmo.jl to fully use the new JuMP nonlinear interface, along with a refactor that makes the package more graph-centric (versus the current v0.5, which is very node-centric).

The main theme of Plasmo.jl is now to treat an optigraph as an optimization problem made of nodes and edges that a user is free to modify, or from which they can generate a new optigraph using partitioning or by querying the graph topology. The key change is to strictly use graphs as subproblems (as opposed to nodes), which standardizes the solution approaches users might take (it is still possible to optimize an individual node, but doing so internally creates a new graph that contains that one node).

In the v0.5 version of Plasmo.jl, we use a JuMP.Model for each node, which does not scale when nodes contain few variables and constraints and the graph consists of thousands of nodes (e.g. dynamic optimization problems with collocation methods). The new implementation makes nodes and edges more lightweight by storing their model data in the graph they are created in. The v0.5 implementation was also quite hacky in how it integrated JuMP.Model objects with the optigraph. It is still possible to set a JuMP.Model on a node, but doing so copies the model data over; the JuMP.Model itself is not modified by this operation. This also means that once users set a JuMP.Model, they should work through the optinode for any further modifications to the graph.
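As a rough sketch of these copy semantics (a hedged example; set_model is the v0.5 name for attaching a JuMP.Model to a node, and the exact name may differ in the rewrite):

```julia
using Plasmo, JuMP

m = JuMP.Model()
@variable(m, x >= 0)

g = OptiGraph()
@optinode(g, n)
set_model(n, m)  # copies the model data onto the node; `m` itself is untouched

@variable(m, y)  # modifies only the standalone JuMP.Model, not the graph
@variable(n, z)  # further changes must go through the optinode
```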

Other major changes include:

Short-term issues to address:

The Long-term Roadmap

GraphOptInterface

There are a couple of directions we could take Plasmo.jl from here. I think developing GraphOptInterface.jl and using it to interface with MadNLP.jl to perform Schur decomposition would be a useful start. I have always wanted to build a standard interface to DSPopt.jl and HiOp.jl, although those packages do not seem to be actively maintained. We would also need to think about how to do distributed computing with GraphOptInterface.jl.

Distributed OptiGraphs

We could also develop distributed optigraphs that work the same way as normal optigraphs (possibly by parameterizing the optigraph with some new types). These graphs could support writing standard distributed algorithms using the same methods that exist on the current OptiGraph. The way we manage distributed structures could also use or mirror what we do in GraphOptInterface.jl.

dlcole3 commented 1 month ago

@jalving I am running into an issue with the objective function on a node under the new version. If I do

```julia
using Plasmo, JuMP

g = OptiGraph()
@optinode(g, n)
@variable(n, x[1:3])
@objective(n, Min, x[1] + x[2])

obj_func = objective_function(n)
add_to_expression!(obj_func, x[3] * 2)
JuMP.set_objective_function(n, obj_func)
```

I get an error saying The objective function ... is not supported by JuMP.

dlcole3 commented 1 month ago

Also, the JuMP.delete_upper_bound extension deletes the lower bound: https://github.com/jalving/Plasmo.jl/blob/0ac2b8c175b627c123665bd4df29e367ca77f3cd/src/node_variables.jl#L272-L275
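For concreteness, a minimal sketch of the symptom (assuming the bound-query methods are forwarded to node variables the same way JuMP defines them):

```julia
using Plasmo

g = OptiGraph()
@optinode(g, n)
@variable(n, 0 <= x <= 10)

delete_upper_bound(x)
has_upper_bound(x)  # expected false; presumably still true with this bug
has_lower_bound(x)  # expected true; presumably false, since the lower bound was deleted
```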

dlcole3 commented 1 month ago

The objective value does not seem to be computed from the node objectives in the new version. For instance, the code below returns an objective value of 0, but the true objective is 3. If you query the variable values, they each show 1. The HiGHS output also shows an objective value of 0. I am guessing this is because the graph no longer takes the objective values from the nodes? Are objectives supposed to be set directly on the graph now? If so, it might be worth adding a deprecation warning when @objective is called on a node rather than a graph.

```julia
using Plasmo, HiGHS

g = OptiGraph()
@optinode(g, n)
@variable(n, x[1:3] >= 1)
@objective(n, Min, x[1] + x[2] + x[3])

set_optimizer(g, HiGHS.Optimizer)

optimize!(g)

println(value.(x))
println(objective_value(g))
```
jalving commented 1 month ago

> @jalving I am running into an issue with the objective function on a node under the new version. If I do
>
> ```julia
> g = OptiGraph()
> @optinode(g, n)
> @variable(n, x[1:3])
> @objective(n, Min, x[1] + x[2])
>
> obj_func = objective_function(n)
> add_to_expression!(obj_func, x[3] * 2)
> JuMP.set_objective_function(n, obj_func)
> ```
>
> I get an error saying The objective function ... is not supported by JuMP.

This is a case of a missing method; I'll add it in the next commit. In the meantime, set_objective(n, sense, obj_func) should work.
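For reference, here is that workaround applied to the example above (a sketch; MOI.MIN_SENSE is the sense constant behind Min, and MOI is the MathOptInterface alias re-exported by JuMP):

```julia
using Plasmo, JuMP

g = OptiGraph()
@optinode(g, n)
@variable(n, x[1:3])
@objective(n, Min, x[1] + x[2])

obj_func = objective_function(n)
add_to_expression!(obj_func, x[3] * 2)
set_objective(n, MOI.MIN_SENSE, obj_func)  # works until set_objective_function is added
```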

> Also, the JuMP.delete_upper_bound extension deletes the lower bound: https://github.com/jalving/Plasmo.jl/blob/0ac2b8c175b627c123665bd4df29e367ca77f3cd/src/node_variables.jl#L272-L275

This is most certainly a bug. Let me look into it.

> The objective value does not seem to be computed from the node objectives in the new version. For instance, the code below returns an objective value of 0, but the true objective is 3. If you query the variable values, they each show 1. The HiGHS output also shows an objective value of 0. I am guessing this is because the graph no longer takes the objective values from the nodes? Are objectives supposed to be set directly on the graph now? If so, it might be worth adding a deprecation warning when @objective is called on a node rather than a graph.
>
> ```julia
> using Plasmo, HiGHS
>
> g = OptiGraph()
> @optinode(g, n)
> @variable(n, x[1:3] >= 1)
> @objective(n, Min, x[1] + x[2] + x[3])
>
> set_optimizer(g, HiGHS.Optimizer)
>
> optimize!(g)
>
> println(value.(x))
> println(objective_value(g))
> ```

You now need to call set_to_node_objectives(graph) to construct the graph objective function from the node objectives. We could always check whether the graph objective is empty and call this internally on optimize!, but there might be cases where users do not want that to happen. Since this will probably cause issues for existing users migrating to this version, I'll go ahead and change it and add a print statement.
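For migrating users, the example above then becomes:

```julia
using Plasmo, HiGHS

g = OptiGraph()
@optinode(g, n)
@variable(n, x[1:3] >= 1)
@objective(n, Min, x[1] + x[2] + x[3])

set_to_node_objectives(g)  # build the graph objective from the node objectives

set_optimizer(g, HiGHS.Optimizer)
optimize!(g)

println(value.(x))           # [1.0, 1.0, 1.0]
println(objective_value(g))  # 3.0 once the graph objective is set
```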

jalving commented 1 month ago

@dlcole3 I opted to print a warning if the user has an empty graph objective while they have node objectives. For now, I think it makes more sense to have users specify the graph objective directly (i.e., use set_to_node_objectives). We could potentially allow passing node objective updates directly to the graph, but that assumes the objective function is separable.
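Roughly, the check will behave like this sketch (not the final code; has_node_objective and the empty-objective test are placeholders for whatever the implementation ends up using):

```julia
# Hypothetical sketch of the warning path, not the actual Plasmo source.
function _warn_if_node_objectives(graph::OptiGraph)
    if iszero(objective_function(graph)) && has_node_objective(graph)
        @warn "The graph objective is empty but nodes define objectives. " *
              "Call set_to_node_objectives(graph) to use them."
    end
    return
end
```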

codecov-commenter commented 1 month ago

:warning: Please install the Codecov app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

Attention: Patch coverage is 81.91824% with 345 lines in your changes missing coverage. Please review.

Project coverage is 80.85%. Comparing base (1702300) to head (8bacd4a). Report is 2 commits behind head on main.

| Files | Patch % | Lines |
|------|------|------|
| src/node_variables.jl | 70.68% | 85 Missing :warning: |
| src/optigraph.jl | 79.94% | 79 Missing :warning: |
| src/aggregate.jl | 57.04% | 64 Missing :warning: |
| src/jump_interop.jl | 69.66% | 54 Missing :warning: |
| src/optiedge.jl | 67.74% | 20 Missing :warning: |
| src/backends/moi_backend.jl | 96.07% | 13 Missing :warning: |
| src/graph_functions/partition.jl | 95.90% | 10 Missing :warning: |
| src/graph_functions/kahypar.jl | 46.66% | 8 Missing :warning: |
| src/graph_functions/projections.jl | 95.12% | 6 Missing :warning: |
| src/optielement.jl | 91.52% | 5 Missing :warning: |
| ... and 1 more | | |

:exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality.

:exclamation: There is a different number of reports uploaded between BASE (1702300) and HEAD (8bacd4a). HEAD has 1 upload less than BASE:

| Flag | BASE (1702300) | HEAD (8bacd4a) |
|------|------|------|
| | 2 | 1 |
Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main     #105      +/-   ##
==========================================
- Coverage   87.79%   80.85%    -6.95%
==========================================
  Files          19       16        -3
  Lines        2368     2246      -122
==========================================
- Hits         2079     1816      -263
- Misses        289      430      +141
```


dlcole3 commented 1 month ago

@jalving I noticed you extended JuMP.set_objective_function for nodes here. Can this be added for graphs as well (i.e., for setting the objective on the OptiGraph)?

jalving commented 1 month ago

> @jalving I noticed you extended JuMP.set_objective_function for nodes here. Can this be added for graphs as well (i.e., for setting the objective on the OptiGraph)?

Yes, those methods should work as seen here. Are you running into issues? It works in the examples.

Or do you mean you want the node update to automatically update the graph objective? That should be possible, but I will need to think it through a bit.
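For example, something like this should work (a sketch assuming the @objective macro accepts an OptiGraph, as the linked methods are meant to support):

```julia
using Plasmo

g = OptiGraph()
@optinode(g, n[1:2])
@variable(n[1], x >= 1)
@variable(n[2], y >= 2)

# graph-level objective spanning variables on different nodes
@objective(g, Min, n[1][:x] + 2 * n[2][:y])
```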

dlcole3 commented 1 month ago

> > @jalving I noticed you extended JuMP.set_objective_function for nodes here. Can this be added for graphs as well (i.e., for setting the objective on the OptiGraph)?
>
> Yes, those methods should work as seen here. Are you running into issues? It works in the examples.
>
> Or do you mean you want the node update to automatically update the graph objective? That should be possible, but I will need to think it through a bit.

My bad. I made a mistake: I ran git pull, but I had some unstashed changes, so it had not actually pulled the newest changes. It does work for graphs.

It does look like the warning statement for when no objective is defined throws an error saying that has_node_objective is not defined (from here). Below is an MWE:

```julia
using Plasmo, HiGHS

g = OptiGraph()

@optinode(g, n[1:2])

for node in n
    @variable(node, x >= 0)
    @objective(node, Min, x)
end

set_optimizer(g, HiGHS.Optimizer)
optimize!(g)
```

Also, it might be worth updating the printed output for variables. For the code above, if I call n[1][:x], the variable is shown in the REPL as An Abstract JuMP Model[:x].

jalving commented 1 month ago

> It does look like the warning statement for when no objective is defined throws an error saying that has_node_objective is not defined (from here). Below is an MWE:

> Also, it might be worth updating the printed output for variables. For the code above, if I call n[1][:x], the variable is shown in the REPL as An Abstract JuMP Model[:x].

Interesting, these both work for me. Can you make sure you have a clean git pull?

jalving commented 1 month ago

It looks like doing index_map_FS = index_map[F, S] here is no longer valid Julia code. I'll look into a workaround. This turned out to be a bug that didn't get picked up on stable Julia.

jalving commented 1 month ago

@odow All of the private MOI methods should be gone with this PR. Let me know if you come across anything problematic.