Closed: phK3 closed this issue 1 year ago.
Dear @phK3,

Thank you very much for addressing this issue. As far as I am aware, the NNET format is ambiguous with respect to the final activation layer: the format only implicitly specifies ReLU activation functions, so it is unclear whether an activation layer should follow the last linear layer. CORA adds a ReLU activation after each linear layer, which produces the zero output vector you described. Due to this ambiguity, we prefer the ONNX format.

Hope that helps!
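For illustration, here is a minimal sketch (with made-up weights, not from the networks in this issue) of how appending a ReLU after a final linear layer whose outputs happen to be negative collapses the result to the zero vector:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical 2-layer network; the weights are chosen so the
# final linear layer produces only negative outputs.
W1 = np.array([[1.0, -1.0], [0.5, 0.5]])
b1 = np.array([0.0, 0.0])
W2 = np.array([[-1.0, -2.0], [-0.5, -1.0]])
b2 = np.array([-0.1, -0.1])

x = np.array([1.0, 2.0])
h = relu(W1 @ x + b1)      # hidden layer, ReLU is unambiguous here
y_linear = W2 @ h + b2     # reading 1: no activation after the last layer
y_relu = relu(y_linear)    # reading 2 (CORA): ReLU after every linear layer

print(y_linear)  # negative entries preserved
print(y_relu)    # clipped to the zero vector
```

Both readings are consistent with the NNET file itself; only the second one yields the all-zero output reported here.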
Note: CORA gets a big update regarding the verification of neural networks with the next major release, which eliminates the neuralNetworkOld class.
Closing this issue as CORA v2024 is now released.
Concrete evaluation of neuralNetworkOld and overapproximate evaluation with zonotopes gives wrong results.

The execution of the following code:

produced the output

The concrete evaluation also results in

However, evaluating the same .nnet network in a different program, or loading the corresponding .onnx network in CORA, results in the output of

The .nnet networks are available in this repo, while the .onnx networks and .vnnlib specifications are available in this repo.