g-poveda opened 1 year ago
This is not trivial to do but it can be done by inspecting the modeling object that is created when the neural network is added to the gurobipy model (the object returned by add_predictor_constr).
This object should contain the layers of the network and, for each layer, its input and output variables.
It would take me a bit of time but I could make an example.
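For illustration, a minimal sketch of what such an inspection might look like, assuming a scikit-learn MLPRegressor. The `_layers` attribute used to reach the per-layer sub-objects is an internal implementation detail and is an assumption here; its name may differ between versions of gurobi-machinelearning.

```python
import numpy as np
import gurobipy as gp
from sklearn.neural_network import MLPRegressor
from gurobi_ml import add_predictor_constr

# Train a small ReLU network on placeholder data (illustration only).
X = np.random.rand(100, 4)
y = X.sum(axis=1)
nn = MLPRegressor(hidden_layer_sizes=(10, 10), activation="relu", max_iter=500).fit(X, y)

m = gp.Model()
x = m.addMVar(4, lb=0.0, ub=1.0, name="x")
out = m.addMVar(1, lb=-gp.GRB.INFINITY, name="out")

# Embed the network; the returned object is the modeling object to inspect.
pred_constr = add_predictor_constr(m, nn, x, out)
pred_constr.print_stats()  # summary of the added variables and constraints

# One sub-object per layer, each exposing the Gurobi variables used as that
# layer's input and output. NOTE: `_layers` is an assumed internal attribute;
# the exact name may change between versions.
for layer in pred_constr._layers:
    print(type(layer).__name__, layer.input.shape, layer.output.shape)
```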
Many thanks for the help, and happy to get an example. To update the bounds afterward (after add_predictor_constr), would you advise using "setAttr("lb"/"ub", value)", adding an additional constraint, or is there a better way?
Doing a very simple example is not so long. What I did doesn't really make sense in itself, but I hope it shows how I would do it.
It's a variation of the adversarial example from the documentation where, after solving the model, I put bounds on the output of the first layer (you can search for "Add bounds").
> To update the bounds afterward (after add_predictor_constr), would you advise using "setAttr("lb"/"ub", value)", adding an additional constraint, or is there a better way?
Yes, I would directly change the bounds of the variables, as I did in the notebook.
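Continuing the sketch above, writing the bounds straight to the variables' LB/UB attributes would look roughly like this; the `_layers` attribute is again an assumption, and the 0.0 / 5.0 values are placeholders for whatever valid bounds have been computed.

```python
# Tighten the output variables of the first layer directly.
first_layer_out = pred_constr._layers[0].output  # an MVar

first_layer_out.LB = 0.0   # same effect as setAttr("LB", ...) on each variable
first_layer_out.UB = 5.0

m.update()
m.optimize()
```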
Excellent. If you're interested, I might fork and give it a try, adding a function that computes valid bounds and updates the gurobi model (using interval arithmetic or this kind of method: https://github.com/eth-sri/eran)? Or maybe you're interested in looking at it yourself too :)
Yes that would be nice. I am not sure how much I can do in this direction but it would definitely be interesting.
Thinking a bit more about this, there is a documentation effort to be made. In most cases those intermediate variables are stored, but that part is not documented. This should definitely be done at some point. But it is tedious to go through all the objects (and also to make sure that names are somewhat consistent).
I'll try to start with neural networks but I won't promise a delivery date.
It would be interesting to have a way of providing bounds values for neurons, similar to what is done in add_output_vars. With correct bounds, it could for example discard many of the constraints or binary variables introduced to model the ReLU layers.
Computation time could improve as a result, but not necessarily (I will try to run some experiments).
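For reference, the interval-arithmetic bound propagation mentioned above is the standard textbook computation rather than anything provided by gurobi-machinelearning. A rough sketch for dense + ReLU layers, reusing the nn object from the earlier sketch, could look like this:

```python
import numpy as np

def propagate_dense_relu(lb, ub, W, b, relu=True):
    """Interval bounds for y = ReLU(W @ x + b) given lb <= x <= ub.

    Standard interval arithmetic: the lower bound pairs positive weights
    with input lower bounds and negative weights with input upper bounds;
    the upper bound does the opposite.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    out_lb = W_pos @ lb + W_neg @ ub + b
    out_ub = W_pos @ ub + W_neg @ lb + b
    if relu:
        out_lb = np.maximum(out_lb, 0.0)
        out_ub = np.maximum(out_ub, 0.0)
    return out_lb, out_ub

# Usage with the MLPRegressor from the earlier sketch (coefs_ / intercepts_
# are standard scikit-learn attributes; the last layer has no ReLU).
lb, ub = np.zeros(4), np.ones(4)
for k, (W, b) in enumerate(zip(nn.coefs_, nn.intercepts_)):
    last = k == len(nn.coefs_) - 1
    lb, ub = propagate_dense_relu(lb, ub, W.T, b, relu=not last)
    print(f"layer {k}: outputs in [{lb.min():.3f}, {ub.max():.3f}]")
```

With such bounds, a neuron whose pre-activation upper bound is non-positive is always inactive, and one whose lower bound is non-negative behaves as the identity, so the binary variables modeling those ReLUs become unnecessary; the computed bounds could then be written back to the model with the direct LB/UB assignment shown earlier.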