dais-ita / interpretability-papers

Papers on interpretable deep learning, for review

Interpretable deep neural networks for single-trial EEG classification #47

Open richardtomsett opened 6 years ago

richardtomsett commented 6 years ago

Interpretable deep neural networks for single-trial EEG classification

Background: In cognitive neuroscience, the potential of deep neural networks (DNNs) for solving complex classification tasks is yet to be fully exploited. The most limiting factor is that DNNs, as notorious ‘black boxes’, do not provide insight into the neurophysiological phenomena underlying a decision. Layer-wise relevance propagation (LRP) has been introduced as a novel method to explain individual network decisions.

New method: We propose the application of DNNs with LRP for the first time for EEG data analysis. Through LRP, the single-trial DNN decisions are transformed into heatmaps indicating each data point's relevance for the outcome of the decision.

Results: The DNN achieves classification accuracies comparable to those of CSP-LDA. In subjects with low performance, subject-to-subject transfer of trained DNNs can improve the results. The single-trial LRP heatmaps reveal neurophysiologically plausible patterns, resembling CSP-derived scalp maps. Critically, while CSP patterns represent class-wise aggregated information, LRP heatmaps pinpoint neural patterns to single time points in single trials.

Comparison with existing method(s): We compare the classification performance of DNNs to that of linear CSP-LDA on two data sets related to motor-imagery BCI.

Conclusion: We have demonstrated that the DNN is a powerful non-linear tool for EEG analysis. With LRP, a new quality of high-resolution assessment of neural activity can be reached. LRP is a potential remedy for the lack of interpretability that has limited the utility of DNNs in neuroscientific applications. The extreme specificity of the LRP-derived heatmaps opens up new avenues for investigating the neural activity underlying complex perception- or decision-related processes.
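To make the LRP step concrete, here is a minimal NumPy sketch of the epsilon-rule backward pass on a toy fully-connected ReLU network. The architecture, the random weights, and the random input vector standing in for single-trial EEG features are illustrative placeholders only, not the network or data from the paper.

```python
# Minimal sketch of the LRP epsilon rule on a toy dense ReLU network.
# All weights and the "EEG feature" input are random placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network: 8 input features -> 6 hidden units -> 2 classes.
W1, b1 = rng.normal(size=(8, 6)), np.zeros(6)
W2, b2 = rng.normal(size=(6, 2)), np.zeros(2)

def forward(x):
    """Forward pass, keeping activations for the backward relevance pass."""
    a1 = np.maximum(0.0, x @ W1 + b1)   # hidden ReLU activations
    z2 = a1 @ W2 + b2                   # class scores (pre-softmax)
    return a1, z2

def lrp_epsilon(a_in, W, b, R_out, eps=1e-6):
    """Epsilon rule: R_i = a_i * sum_j W_ij * R_j / (z_j + eps*sign(z_j))."""
    z = a_in @ W + b
    z = z + eps * np.sign(z)            # stabilizer avoids division by ~0
    s = R_out / z
    return a_in * (s @ W.T)             # relevance redistributed to inputs

x = rng.normal(size=8)                  # stand-in for one EEG trial's features
a1, scores = forward(x)

# Propagate only the predicted class's score back as relevance.
R2 = np.zeros_like(scores)
k = scores.argmax()
R2[k] = scores[k]

R1 = lrp_epsilon(a1, W2, b2, R2)        # classes -> hidden units
R0 = lrp_epsilon(x, W1, b1, R1)         # hidden units -> input "heatmap"

print("input relevance heatmap:", np.round(R0, 3))
# With zero biases and a tiny eps, total relevance is approximately
# conserved: R0.sum() should be close to the propagated class score.
print("conservation check:", R0.sum(), "vs", scores[k])
```

In the paper's setting, the input relevance would form a time-by-channel heatmap over the raw EEG trial rather than an 8-element vector, but the layer-by-layer redistribution is the same.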

Bibtex:

@article{STURM2016141,
  title    = "Interpretable deep neural networks for single-trial EEG classification",
  journal  = "Journal of Neuroscience Methods",
  volume   = "274",
  pages    = "141--145",
  year     = "2016",
  issn     = "0165-0270",
  doi      = "10.1016/j.jneumeth.2016.10.008",
  url      = "http://www.sciencedirect.com/science/article/pii/S0165027016302333",
  author   = "Irene Sturm and Sebastian Lapuschkin and Wojciech Samek and Klaus-Robert Müller",
  keywords = "Brain–computer interfacing, Neural networks, Interpretability"
}

richardtomsett commented 6 years ago

From previous review: The recently-proposed layer-wise relevance propagation (LRP) algorithm from Wojciech Samek’s group (Binder et al. 2016a, Binder et al. 2016b) exploits the fact that individual neural network units are differentiable to decompose the network output in terms of its input variables. It is a principled method with a close relationship to Taylor decomposition, and it is applicable to arbitrary deep neural network architectures (Montavon et al. 2017). The output is a heatmap over the input features that indicates the relevance of each feature to the model output. This makes the method particularly well suited to analyzing image classifiers, though it has also been adapted for text and electroencephalogram (EEG) signal classification (Sturm et al. 2016). Samek et al. (2017) have also developed an objective metric for comparing the heatmaps produced by LRP and similar heatmapping algorithms; a toy sketch of that evaluation idea follows the reference notes below.

*Binder et al. 2016a: issue #44, Binder et al. 2016b: issue #45, Montavon et al. 2017: issue #46, Sturm et al. 2016: issue #47, Samek et al. 2017: issue #48.
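The objective metric from Samek et al. (2017) mentioned above is, roughly, a perturbation test: occlude input regions in decreasing order of attributed relevance and measure how quickly the classifier's score collapses. The sketch below is a heavily simplified, single-feature version of that idea; the linear "classifier", its weights, and the exact relevance scores are all placeholders.

```python
# Hedged sketch of a perturbation-style heatmap evaluation: flip features
# most-relevant-first and track the score drop. A good heatmap ordering
# should make the score fall faster than a random ordering would.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=8)               # toy linear "classifier" weights

def model_score(x):
    return float(x @ w)              # stand-in for a trained network's class score

def perturbation_curve(x, relevance, baseline=0.0):
    """Class score after occluding 0, 1, 2, ... features, most relevant first."""
    order = np.argsort(relevance)[::-1]
    x_pert = x.copy()
    scores = [model_score(x_pert)]
    for i in order:
        x_pert[i] = baseline         # "flip" one feature to an uninformative value
        scores.append(model_score(x_pert))
    return np.array(scores)

x = rng.normal(size=8)
relevance = x * w                    # exact attribution for a linear model
curve = perturbation_curve(x, relevance)
# Area over the perturbation curve: larger means the heatmap ranked
# the truly influential features earlier.
print("AOPC:", np.mean(curve[0] - curve))
```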