Pyomo / pyomo

An object-oriented algebraic modeling language in Python for structured optimization problems.
https://www.pyomo.org

Scaling transformations on Block structured models #898

Closed · andrewlee94 closed this issue 1 year ago

andrewlee94 commented 5 years ago

I have been working with the Pyomo scaling transformation in preparation for implementing this within IDAES, and I have run into some unexpected behaviour when working with block-structured models. It appears that only scaling factors declared at the top level of the model are considered by the scaling transformation, and that it does not descend into the block tree to look for additional scaling factors.

I have tested a few cases to try to work out what was happening.

Case 1:

import pyomo.environ as pe

###
# create the original unscaled model
###
model = pe.ConcreteModel()

model.b = pe.Block()
model.b.a = pe.Var(initialize=10, bounds=(0,20))

###
# set the scaling parameters
###
model.b.scaling_factor = pe.Suffix(direction=pe.Suffix.EXPORT)
model.b.scaling_factor[model.b.a] = 10

###
# build and solve the scaled model
###
scaled_model = pe.TransformationFactory('core.scale_model').create_using(model)

Returns: ValueError: ScaleModel transformation called with scaling_method='user', but cannot find the suffix 'scaling_factor' on the model

This may be expected behaviour: without a scaling_factor suffix present at the top level, the transformation raises an error. I do find it strange that the error is a ValueError, though - I would generally expect an AttributeError when the attribute is missing entirely.

Case 2:

import pyomo.environ as pe

###
# create the original unscaled model
###
model = pe.ConcreteModel()

model.b = pe.Block()
model.b.a = pe.Var(initialize=10, bounds=(0,20))

###
# set the scaling parameters
###
model.scaling_factor = pe.Suffix(direction=pe.Suffix.EXPORT)
model.b.scaling_factor = pe.Suffix(direction=pe.Suffix.EXPORT)
model.b.scaling_factor[model.b.a] = 10

###
# build and solve the scaled model
###
scaled_model = pe.TransformationFactory('core.scale_model').create_using(model)

# print the scaled model
scaled_model.b.scaled_a.pprint()

Returns:

scaled_a : Size=1, Index=None
    Key  : Lower : Value : Upper : Fixed : Stale : Domain
    None :   0.0 :  10.0 :  20.0 : False : False :  Reals

The model has been transformed and the variable a has been replaced by scaled_a, but the value and bounds have not been scaled. It appears that scaling factors defined on child blocks are not picked up by the transformation.

Case 3:

import pyomo.environ as pe

###
# create the original unscaled model
###
model = pe.ConcreteModel()

model.b = pe.Block()
model.b.a = pe.Var(initialize=10, bounds=(0,20))

###
# set the scaling parameters
###
model.scaling_factor = pe.Suffix(direction=pe.Suffix.EXPORT)
model.scaling_factor[model.b.a] = 10

###
# build and solve the scaled model
###
scaled_model = pe.TransformationFactory('core.scale_model').create_using(model)

# print the scaled model
scaled_model.b.scaled_a.pprint()

Returns:

scaled_a : Size=1, Index=None
    Key  : Lower : Value : Upper : Fixed : Stale : Domain
    None :   0.0 : 100.0 : 200.0 : False : False :  Reals

This now works as expected: both the variable value and its bounds have been scaled.

It would be good if the scaling transformation descended into the block structure to look for scaling factors, since it makes more sense to declare them at the same level as the component they apply to. This would also simplify modular construction of scaling factors, as each block could be responsible for its own factors without having to propagate them up to the top level of the model.
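In the meantime, a possible stop-gap on the user side is to roll block-level suffixes up to the top level before calling the transformation. Below is a minimal sketch, assuming a last-one-wins policy for conflicting factors; the helper name is mine and not part of Pyomo:

import pyomo.environ as pe

def collect_scaling_factors(model):
    # Hypothetical helper: copy scaling factors declared on sub-blocks into a
    # single top-level scaling_factor suffix so the current transformation
    # can see them.
    if model.component('scaling_factor') is None:
        model.scaling_factor = pe.Suffix(direction=pe.Suffix.EXPORT)
    for block in model.block_data_objects(descend_into=True):
        suffix = block.component('scaling_factor')
        if suffix is None or suffix is model.scaling_factor:
            continue
        for component, factor in suffix.items():
            # Last writer wins here; a real implementation would need a policy
            # for inconsistent factors (see the discussion below).
            model.scaling_factor[component] = factor

collect_scaling_factors(model)
scaled_model = pe.TransformationFactory('core.scale_model').create_using(model)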

qtothec commented 5 years ago

What behavior would you expect if there are inconsistencies in the scaling factors specified at different levels?

carldlaird commented 5 years ago

Thanks Andrew. I will look into this. I need to figure out what is happening with the suffixes and how they propagate. Thanks for the example.

However, I will also point out that I would NOT use the scaling transformation - I would use the user scaling option in IPOPT directly. It uses the same suffixes. The scaling transformation is there if you want to perform scaling with a solver other than IPOPT. (Having said that, it should work, so I will look into it.)
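For reference, using IPOPT's user scaling with the same suffix looks roughly like the sketch below. This is only illustrative (the small model with an objective is mine, not from the example above); nlp_scaling_method='user-scaling' is the IPOPT option that enables user-supplied scaling:

import pyomo.environ as pe

# Illustrative model with an objective so IPOPT has something to solve.
model = pe.ConcreteModel()
model.b = pe.Block()
model.b.a = pe.Var(initialize=10, bounds=(0, 20))
model.obj = pe.Objective(expr=(model.b.a - 5) ** 2)

# Same suffix as above, declared at the top level of the model.
model.scaling_factor = pe.Suffix(direction=pe.Suffix.EXPORT)
model.scaling_factor[model.b.a] = 10

# Ask IPOPT to apply the user-supplied scaling instead of transforming
# the model; 'user-scaling' is an IPOPT option value, not a Pyomo one.
solver = pe.SolverFactory('ipopt')
solver.options['nlp_scaling_method'] = 'user-scaling'
solver.solve(model, tee=True)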

andrewlee94 commented 5 years ago

@qtothec I had not considered inconsistencies at different levels, which is a big issue. The expected behaviour (from a naive user) would be to use the most recently specified scaling factor. That would be a nightmare to implement, however, so my first suggestion would be to use the scaling factor defined at the same level as the component is declared.

@carldlaird I would agree that using IPOPT's scaling directly would be the best course, but for a general, inexperienced user we would like a common method that works for everything. Advanced users could then use IPOPT's scaling directly if they know how to do so (or any other solver they wish to use).

qtothec commented 5 years ago

@carldlaird This is a bit of an aside, but... does that mean that if a user solves with IPOPT after applying the scaling transformation, the model would be scaled twice?

carldlaird commented 5 years ago

I don't recall whether the scaling transformation removes the scaling suffixes or not. However, you also need to turn on an IPOPT option to use the scaling, so you would have to explicitly ask for (1) the scaling transformation and (2) IPOPT to also use the scaling factors.
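A quick way to check the first point (just a sketch, built on the Case 3 snippet above, not an authoritative answer) is to list whatever Suffix components remain on the scaled model:

import pyomo.environ as pe

# Inspect which Suffix components remain on the scaled model from Case 3;
# if a 'scaling_factor' suffix survives and is still exported, a subsequent
# IPOPT solve with user scaling enabled would pick it up again.
for suffix in scaled_model.component_objects(pe.Suffix, descend_into=True):
    print(suffix.name, [(comp.name, val) for comp, val in suffix.items()])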