vtjeng / MIPVerify.jl

Evaluating Robustness of Neural Networks with Mixed Integer Programming
MIT License
113 stars · 31 forks

Unclear Intermediate Layer Implementation in find_adversarial_example #145

Closed ace-shifu closed 1 year ago

ace-shifu commented 1 year ago

Hello, I am very interested in your project. After reading the source code of find_adversarial_example, however, I found that it only appears to set the input constraints and the related output constraints; intermediate layers such as affine and ReLU layers are never explicitly called. How are the methods in the layers directory invoked from find_adversarial_example to encode the intermediate layers?

vtjeng commented 1 year ago

Hi @ace-shifu -- thanks for your interest.

Here's an example where you specify the neural network: https://github.com/vtjeng/MIPVerify.jl/blob/b0ebe49536656bc416544d032d8f3e7eee436d97/test/integration/sequential/generated_weights/mfc%2Bmfc%2Bsoftmax.jl#L29-L39

Let's start by taking a look at the Sequential struct: https://github.com/vtjeng/MIPVerify.jl/blob/b0ebe49536656bc416544d032d8f3e7eee436d97/src/net_components/nets/sequential.jl#L12-L25

The key chunk of code is the last line, where we define `p(x) = x |> p.layers` (the equivalent of `p.layers(x)` in other programming languages): the input is piped through each layer of the network in sequence.
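For intuition, here is a minimal, self-contained Julia sketch of that piping pattern. The `Affine` and `ReLULayer` types below are hypothetical stand-ins for illustration, not MIPVerify's actual layer types:

```julia
# Hypothetical stand-ins for MIPVerify's layer types (illustration only).
struct Affine
    W::Matrix{Float64}
    b::Vector{Float64}
end

struct ReLULayer end

# Each layer is callable on an input x.
(l::Affine)(x) = l.W * x .+ l.b
(l::ReLULayer)(x) = max.(x, 0.0)

# Piping x through an array of layers applies them in order,
# mirroring the `p(x) = x |> p.layers` definition in Sequential.
Base.:|>(x, layers::Vector) = foldl((acc, layer) -> layer(acc), layers; init = x)

layers = [Affine([1.0 -1.0; 0.0 2.0], [0.0, -1.0]), ReLULayer()]
x = [3.0, 1.0]
y = x |> layers   # Affine gives [2.0, 1.0]; ReLU leaves both entries unchanged
```

Because each layer is a callable struct, the same chaining works no matter what the element type of `x` is, which is what the next point relies on.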

Now, here's the important bit: `x` can either be a number or a `JuMPLinearType`: https://github.com/vtjeng/MIPVerify.jl/blob/b0ebe49536656bc416544d032d8f3e7eee436d97/src/net_components.jl#L5

If `x` is a `JuMPLinearType`, then the output `p(x)` is also a `JuMPLinearType`, with the appropriate constraints imposed between the input and output as each layer is applied. (These output variables are what the output constraints are then imposed on.)
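To make that concrete, here is a hedged sketch of the idea (not MIPVerify's actual implementation, which does careful bound tightening and handles degenerate cases): a ReLU applied to a plain number is just arithmetic, while a ReLU applied to a JuMP decision variable with known bounds `l <= x <= u` introduces an output variable and a binary indicator tied together by the standard big-M constraints:

```julia
using JuMP   # assumes JuMP is installed

# On a plain number, ReLU is just arithmetic.
relu(x::Real) = max(x, 0.0)

# On a JuMP variable with precomputed bounds l <= x <= u (MIPVerify
# derives such bounds itself), ReLU adds an output variable y and a
# binary indicator a with big-M constraints:
#   a = 1  =>  y = x   (active branch, requires x >= 0 feasible)
#   a = 0  =>  y = 0   (inactive branch, requires x <= 0 feasible)
function relu(m::Model, x::VariableRef, l::Float64, u::Float64)
    y = @variable(m, lower_bound = 0.0)
    a = @variable(m, binary = true)
    @constraint(m, y >= x)
    @constraint(m, y <= x - l * (1 - a))
    @constraint(m, y <= u * a)
    return y
end
```

In the actual library, multiple dispatch selects the right method automatically, so the same `x |> p.layers` call either evaluates the network numerically or builds up the MIP encoding, depending on the type of `x`.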

Feel free to reach out if you have more questions.

ace-shifu commented 1 year ago

Thank you for your patient response. My issue has been resolved.