jump-dev / Convex.jl

A Julia package for disciplined convex programming
https://jump.dev/Convex.jl/stable/

Scalarize everything? #622

Closed ericphanson closed 5 months ago

ericphanson commented 5 months ago

I was thinking again that a lot of issues (like slow indexing, #509, the lack of multidimensional arrays, and some of the general clunkiness of the code base) could be solved by moving to a JuMP/MOI style of working with scalars rather than vectors/matrices, and by using native Julia containers. It could also mean easier interop with MOI; currently we can only go to MOI at the "ends" of the problem (e.g. when actually adding a constraint or objective), not in the middle (if we want to do something more at the Convex.jl level afterwards).

I wonder if a scalar foundation is possible while keeping the current syntax? I think Convex.jl is more approachable for non-OR folks than JuMP (e.g. quantum-info folks and people in other fields), since it is closer to how problems are written down in those fields, and it avoids macros. But it's not really clear to me whether things could still work smoothly using scalars and without a macro interface. I'm also not 100% sure what could go wrong…

Would appreciate any thoughts @odow (or anyone else)

ericphanson commented 5 months ago

I guess one obvious problem with native containers is dispatch, since we have an operator-overloading paradigm. Maybe we could have an ArrayVariable{N} which is a light wrapper around Array{MOI.VariableIndex, N}, though. So it wouldn't be bring-your-own-container, but it would still be more flexible than currently (i.e. multidimensional, can support broadcasting, indexes trivially).
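A minimal sketch of what such a wrapper might look like (hypothetical, not existing Convex.jl code): subtyping `AbstractArray` gives indexing and broadcasting essentially for free.

```julia
# Hypothetical sketch: a light N-dimensional wrapper around MOI variable
# indices. Because it subtypes AbstractArray, generic indexing and
# broadcasting machinery applies without extra method definitions.
import MathOptInterface as MOI

struct ArrayVariable{N} <: AbstractArray{MOI.VariableIndex, N}
    indices::Array{MOI.VariableIndex, N}
end

Base.size(v::ArrayVariable) = size(v.indices)
Base.getindex(v::ArrayVariable, I::Int...) = v.indices[I...]
```

The catch, as noted in the edit below, is that constructing the `MOI.VariableIndex` entries already presupposes a model.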

edit: ah, but if we have a MOI.VariableIndex, then it must already be associated with a model... it would be nice to not need a reference to a model to create a variable, as is the case now.

ericphanson commented 5 months ago

I think maybe if we don't want to have to create the model first and add variables to it, etc., like in JuMP, then it's kinda hard to do the scalar approach, since you can't resolve objects until you have a model later on. So everything is a lazy representation (at the Convex.jl level) until we have an MOI model and can trace the tree and add to it.
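To make the lazy approach concrete, here is a toy sketch (hypothetical names, not the actual Convex.jl internals): operator overloading just records an expression tree, which would only be traced into an MOI model later.

```julia
# Hypothetical sketch of the lazy approach: operations build a tree,
# and nothing is resolved until a model exists at conic-form time.
struct LazyExpr
    head::Symbol           # operation, e.g. :+ or :variable
    children::Vector{Any}  # sub-expressions or constants
end

# Creating a "variable" needs no model reference yet.
variable() = LazyExpr(:variable, Any[])

Base.:+(a::LazyExpr, b::LazyExpr) = LazyExpr(:+, Any[a, b])

x = variable()
y = variable()
ex = x + y   # just a tree node; no model involved yet
```

Tracking each scalar as its own tree node is exactly where the per-scalar overhead mentioned below comes from.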

whereas if we had a model from the start, we could kinda eagerly resolve bits and pieces of problems into VAFs and SAFs and whatever, and have an eager representation at all times; operator-overloading functions would basically be calling MOI.Utilities.operate on them, and/or adding constraints to the model, etc., at call time (not at some later conic-form time).
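A rough sketch of the eager alternative (assuming standard MOI utilities; this is not existing Convex.jl code): with a model in hand, `+` on two variables can immediately produce a ScalarAffineFunction via `MOI.Utilities.operate`, rather than recording a tree.

```julia
# Hypothetical sketch of the eager approach: the model exists up front,
# so operator overloading can resolve expressions at call time.
import MathOptInterface as MOI

model = MOI.Utilities.Model{Float64}()
x = MOI.add_variable(model)
y = MOI.add_variable(model)

# Eagerly combine into a ScalarAffineFunction (an "SAF"), right now,
# instead of at some later conic-form stage.
f = MOI.Utilities.operate(+, Float64, x, y)
```

The trade-off is that every variable creation then requires a model reference, which is exactly what the lazy design avoids.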

it’s hard to do scalars with the lazy approach we have now, since we have to trace everything, and there’s a lot of overhead to track each scalar individually.

odow commented 5 months ago

I've thought about this. The only sensible decision is to make Convex2.jl, which would be the scalar version. Trying to do a rewrite in place is too complicated.

ericphanson commented 5 months ago

yeah... I'll close this, nothing really to do here. I played around with it today and some last notes: