christophM / interpretable-ml-book

Book about interpretable machine learning
https://christophm.github.io/interpretable-ml-book/

No Abstract Interpretation? #126

Closed bvoq closed 4 years ago

bvoq commented 5 years ago

"Interpretable Machine Learning" has a much longer tradition in "Interpretable Software", where a machine learning program can be thought of well simply as a program.

The problem then becomes: What does the program mean, does it perform what it is supposed to do? This discipline is static analysis.
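To make this concrete, here is a minimal sketch (my illustration, not from the cited work) of abstract interpretation with the interval domain: instead of running a program on one concrete input, it is run on a whole set of inputs represented as an interval, and every concrete execution is guaranteed to land inside the computed bounds. The program `y = 3 * x + 1` is a made-up example.

```python
def abs_add(a, b):
    """Interval addition: [a0, a1] + [b0, b1] = [a0 + b0, a1 + b1]."""
    return (a[0] + b[0], a[1] + b[1])

def abs_mul_const(a, c):
    """Multiply an interval by a constant, keeping the bounds ordered."""
    lo, hi = a[0] * c, a[1] * c
    return (min(lo, hi), max(lo, hi))

# Concrete program: y = 3 * x + 1, analyzed for every x in [-1, 2].
x = (-1.0, 2.0)                              # abstract input: all values in [-1, 2]
y = abs_add(abs_mul_const(x, 3), (1.0, 1.0))
print(y)                                     # (-2.0, 7.0): a sound over-approximation of all outputs
```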

The leading paper on interpreting and proving properties of neural networks is: https://www.sri.inf.ethz.ch/publications/singh2019domain

They use a zonotope domain to prove properties, such as robustness to input perturbations, about neural networks.

It is essentially a refined form of interval analysis applied to neural networks.
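As a rough illustration of what "interval analysis applied to neural networks" means, here is a minimal sketch (mine, not from the paper; the weights and input box are hypothetical) of interval bound propagation through a tiny ReLU network. The zonotope domain used in the paper is a more precise relative of this interval domain.

```python
import numpy as np

def interval_affine(lower, upper, W, b):
    """Propagate an input box [lower, upper] through x -> W @ x + b."""
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lower = W_pos @ lower + W_neg @ upper + b
    new_upper = W_pos @ upper + W_neg @ lower + b
    return new_lower, new_upper

def interval_relu(lower, upper):
    """ReLU is monotone, so it maps interval bounds elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Toy 2-layer network with made-up weights.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])

# Input box: each input feature perturbed by +/- 0.1 around (1.0, 0.5).
x = np.array([1.0, 0.5])
lo, hi = x - 0.1, x + 0.1

lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
print("output bounds:", lo, hi)  # every input in the box maps inside these bounds
```

If the proven output bounds never cross a decision boundary, the network's prediction is certified for the entire input box.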

bvoq commented 5 years ago

You might also look at Reluplex: https://www.youtube.com/watch?v=KiKS_zaPb64. It does not scale as well as the work I posted above, but it may be more approachable.

christophM commented 4 years ago

Thanks for sharing, I will have a look.