dais-ita / interpretability-papers

Papers on interpretable deep learning, for review

Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples #21

Open richardtomsett opened 6 years ago

richardtomsett commented 6 years ago

Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples

We address the problem of extracting an automaton from a trained recurrent neural network (RNN). We present a novel algorithm that uses exact learning and abstract interpretation to perform efficient extraction of a minimal automaton describing the state dynamics of a given RNN. We use Angluin's L* algorithm as a learner and the given RNN as an oracle, employing abstract interpretation of the RNN for answering equivalence queries. Our technique allows automaton extraction from the RNN while avoiding state explosion, even when the state vectors are large and fine differentiation is required between RNN states. We experiment with automata extraction from multi-layer GRU- and LSTM-based RNNs, with state-vector dimensions and underlying automata and alphabet sizes significantly larger than in previous automata-extraction work. In some cases, the underlying target language can be described by a succinct automaton, yet the extracted automaton is large and complex. These are cases in which the RNN failed to learn the intended generalization, and our extraction procedure highlights words which are misclassified by the seemingly "perfect" RNN.

Bibtex:
@misc{1711.09576,
  Author = {Gail Weiss and Yoav Goldberg and Eran Yahav},
  Title = {Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples},
  Year = {2017},
  Eprint = {arXiv:1711.09576},
}
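
For intuition about the query/counterexample loop the abstract describes, here is a minimal, self-contained sketch of L*-style extraction with the network as the oracle. It is not the authors' implementation: the trained RNN is replaced by a hypothetical stand-in acceptor (`rnn_accepts`), and the equivalence query is only approximated by random sampling, whereas the paper answers it exactly via abstract interpretation of the RNN's state space. All names and parameters below are illustrative.

```python
import random

ALPHABET = ["a", "b"]

# --- Oracle side (stand-in for the trained RNN) ----------------------------
# In the paper the oracle is the RNN itself: a membership query runs the
# network on a word; an equivalence query is answered by abstract
# interpretation of its state space. Here a toy target language ("contains
# the substring 'aa'") replaces the network so the sketch runs on its own,
# and equivalence is approximated by random testing.

def rnn_accepts(word: str) -> bool:
    return "aa" in word  # hypothetical stand-in for a trained RNN classifier

def membership_query(word: str) -> bool:
    return rnn_accepts(word)

def equivalence_query(dfa, max_len=10, samples=5000):
    """Approximate equivalence check: look for a word on which the
    hypothesis DFA and the oracle disagree; return it as a counterexample."""
    for _ in range(samples):
        n = random.randint(0, max_len)
        word = "".join(random.choice(ALPHABET) for _ in range(n))
        if hypothesis_accepts(dfa, word) != membership_query(word):
            return word
    return None  # no disagreement found -> accept the hypothesis

# --- Learner side: L* with an observation table ----------------------------

def row(prefix, E, table):
    return tuple(table[prefix + e] for e in E)

def fill(table, prefixes, E):
    for p in prefixes:
        for e in E:
            if p + e not in table:
                table[p + e] = membership_query(p + e)

def close(S, E, table):
    """Extend S until every one-letter extension's row already occurs in S."""
    while True:
        fill(table, S + [s + a for s in S for a in ALPHABET], E)
        rows_S = {row(s, E, table) for s in S}
        missing = next((s + a for s in S for a in ALPHABET
                        if row(s + a, E, table) not in rows_S), None)
        if missing is None:
            return
        S.append(missing)

def build_hypothesis(S, E, table):
    """Hypothesis DFA whose states are the distinct rows of S."""
    return {
        "init": row("", E, table),
        "accept": {row(s, E, table) for s in S if table[s]},
        "trans": {(row(s, E, table), a): row(s + a, E, table)
                  for s in S for a in ALPHABET},
    }

def hypothesis_accepts(dfa, word):
    state = dfa["init"]
    for a in word:
        state = dfa["trans"][(state, a)]
    return state in dfa["accept"]

def extract_automaton():
    S, E, table = [""], [""], {}
    while True:
        close(S, E, table)
        dfa = build_hypothesis(S, E, table)
        cex = equivalence_query(dfa)
        if cex is None:
            return dfa
        # Counterexample handling: add its suffixes as new distinguishing
        # experiments (columns), which refines the rows on the next pass.
        for i in range(len(cex) + 1):
            if cex[i:] not in E:
                E.append(cex[i:])

if __name__ == "__main__":
    dfa = extract_automaton()
    n_states = len({src for src, _ in dfa["trans"]})
    print(f"extracted a DFA with {n_states} states")
```

The counterexample handling above follows the common suffix-adding simplification of L* rather than the paper's exact procedure; the point of the sketch is only the division of labour the abstract describes: membership queries go straight to the network, and equivalence queries (here sampled, in the paper computed by abstract interpretation) supply the counterexamples that grow the hypothesis automaton.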