cog-imperial / OMLT

Represent trained machine learning models as Pyomo optimization formulations

Error building BigM reformulation for ReLU activations #150

Open isaingit opened 5 months ago

isaingit commented 5 months ago

Dear developers,

I am getting the following exception when trying to embed a ReLU network in my optimization model: `TypeError: '>' not supported between instances of 'NoneType' and 'int'`

This is the piece of code that triggers the error:

```python
import tensorflow as tf
from pyomo.environ import ConcreteModel
from omlt import OmltBlock
from omlt.io import load_keras_sequential
from omlt.neuralnet import ReluBigMFormulation

model = ConcreteModel()
model.nn = OmltBlock()

keras_model = tf.keras.models.load_model('ip_model_1406S.keras', compile=False)

# Rebuild the last three layers as a standalone Sequential model.
# Note: tf.keras.Input defines an input tensor, but it is not a layer!
fwd_model = tf.keras.Sequential()
fwd_model.add(tf.keras.Input(shape=(30,), name='IN'))  # n_scenarios x n_periods*n_products
fwd_model.add(tf.keras.layers.Dense(units=keras_model.layers[6].units, activation='relu', name='H1'))
fwd_model.add(tf.keras.layers.Dense(units=keras_model.layers[7].units, activation='relu', name='H2'))
fwd_model.add(tf.keras.layers.Dense(units=keras_model.layers[8].units, activation='linear', name='OUT'))

fwd_model.layers[0].set_weights(keras_model.layers[6].get_weights())
fwd_model.layers[1].set_weights(keras_model.layers[7].get_weights())
fwd_model.layers[2].set_weights(keras_model.layers[8].get_weights())

net = load_keras_sequential(fwd_model)
formulation_comp = ReluBigMFormulation(net)
model.nn.build_formulation(formulation_comp)
```

I'd really appreciate it if someone could explain the error and suggest a possible fix.

Thanks in advance!

zkilwein commented 4 months ago

I encountered the same error a few weeks ago. Make sure you provide bounds on the input variables when using the ReLU Big-M formulation: the Big-M constraints need finite input bounds to derive the M constants, so an unbounded input leaves a bound as `None`, and comparing `None` with an `int` raises exactly that `TypeError`.

```python
scaled_ins = {0: (0, 1), 1: (0, 1), ..., 29: (0, 1)}
net = load_keras_sequential(fwd_model, scaled_input_bounds=scaled_ins)
```
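Rather than writing out all 30 entries by hand, the bounds dictionary can be built with a comprehension. A minimal sketch, assuming every input has already been scaled to the unit interval [0, 1] (adjust the interval per input if your scaling differs):

```python
# Build input bounds for all 30 network inputs.
# Keys are input indices; values are (lower, upper) bound pairs, which
# the ReLU Big-M formulation uses to compute finite M constants.
n_inputs = 30
scaled_ins = {i: (0.0, 1.0) for i in range(n_inputs)}

print(len(scaled_ins))   # 30
print(scaled_ins[0])     # (0.0, 1.0)
```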

isaingit commented 3 months ago

Thank you very much for spotting the issue, @zkilwein! That was really helpful.