Closed: jalving closed this pull request 2 years ago.
Merging #72 (e9e8530) into main (fc42b5b) will increase coverage by 0.06%. The diff coverage is n/a.
@@ Coverage Diff @@
## main #72 +/- ##
==========================================
+ Coverage 94.12% 94.18% +0.06%
==========================================
Files 24 24
Lines 1242 1239 -3
Branches 192 192
==========================================
- Hits 1169 1167 -2
+ Misses 43 42 -1
Partials 30 30
Impacted Files | Coverage Δ |
---|---|
src/omlt/block.py | 100.00% <ø> (ø) |
src/omlt/neuralnet/activations/linear.py | 100.00% <ø> (ø) |
src/omlt/neuralnet/activations/smooth.py | 100.00% <ø> (ø) |
src/omlt/neuralnet/layers/full_space.py | 100.00% <ø> (ø) |
src/omlt/neuralnet/layers/reduced_space.py | 100.00% <ø> (ø) |
src/omlt/io/onnx.py | 85.71% <0.00%> (ø) |
src/omlt/scaling.py | 100.00% <0.00%> (ø) |
src/omlt/formulation.py | 92.53% <0.00%> (ø) |
src/omlt/gbt/__init__.py | 100.00% <0.00%> (ø) |
... and 9 more |
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 7787c00...e9e8530.
Good catch! There was a typo that caused it to build the Big-M formulation again. You were correct: the two formulations do build different problems, with different numbers of variables and constraints. The latest commit should reflect this.
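For context on why the two formulations differ, the standard Big-M encoding of a single ReLU node y = max(0, x̂), given pre-activation bounds L ≤ x̂ ≤ U, uses one binary indicator z per node. A minimal stdlib sketch (not OMLT code; the helper name and bounds are made up for illustration) checking that the constraints pin y to the ReLU value:

```python
def relu_bigm_feasible(xhat, y, z, L, U):
    """Check the standard Big-M ReLU constraints for fixed values.

    Constraints: y >= xhat, y >= 0, y <= xhat - L*(1 - z), y <= U*z,
    with z in {0, 1}. (Hypothetical helper, not part of OMLT's API.)
    """
    return (y >= xhat and y >= 0
            and y <= xhat - L * (1 - z)
            and y <= U * z)

L_, U_ = -2.0, 3.0  # assumed pre-activation bounds
for xhat in [-1.5, 0.0, 2.0]:
    y = max(0.0, xhat)          # the true ReLU output
    z = 1 if xhat > 0 else 0    # indicator: is the neuron active?
    assert relu_bigm_feasible(xhat, y, z, L_, U_)

# A wrong output is infeasible for either value of z:
assert not relu_bigm_feasible(-1.5, 1.0, 0, L_, U_)
assert not relu_bigm_feasible(-1.5, 1.0, 1, L_, U_)
```

The partition-based formulation replaces this single disjunction with partial sums over input partitions, which generally yields a tighter relaxation at the cost of more variables, hence the different problem sizes noted above.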
This PR adds the `ReluPartitionFormulation` to the `neural_network_formulations.ipynb` notebook, and I updated the text cells to reflect the addition. The formulation is trivial for a single input, but it does show how one might use it to partition inputs.

I also updated the MNIST adversary examples. I noticed they still referenced the old `NeuralNetworkFormulation`, so I updated them to use `FullSpaceNNFormulation` and noted that they could technically use `ReluBigMFormulation`. Please have a look over the changes.
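The input-partitioning idea behind the partition formulation can be illustrated without OMLT: the ReLU pre-activation w·x + b is split into partial sums over disjoint subsets of the inputs, and the MIP introduces auxiliary variables per partition to tighten the relaxation. A conceptual sketch (the weights, inputs, and the two-way split are arbitrary assumptions, not values from the PR):

```python
# Split a ReLU pre-activation into partial sums over disjoint input partitions.
w = [0.5, -1.0, 2.0, 0.25]   # assumed weights of one neuron
x = [1.0, 2.0, -0.5, 4.0]    # assumed input values
b = 0.1                      # assumed bias
partitions = [[0, 2], [1, 3]]  # an arbitrary two-way split of the input indices

# One partial sum per partition; these become auxiliary variables in the MIP.
partial = [sum(w[i] * x[i] for i in p) for p in partitions]

# The partial sums recover the full pre-activation exactly.
pre = sum(partial) + b
direct = sum(wi * xi for wi, xi in zip(w, x)) + b
assert abs(pre - direct) < 1e-12

y = max(0.0, pre)  # the ReLU output is unchanged; only the encoding differs
```

With a single input there is only one (trivial) partition, which is why the notebook example is trivial but still shows the mechanics of choosing a split.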