NLeSC / mcfly

A deep learning tool for time series classification and regression
Apache License 2.0

Interpretation of models #195

Status: Open. dynamic-biogeography opened this issue 5 years ago

dynamic-biogeography commented 5 years ago

As an end user, it would be great to be able to interpret the models: the learned features, their relative importance, and the overall importance of each input channel. Integrating methods such as those in https://github.com/marcoancona/DeepExplain could make this great tool even more valuable to the user community (a rough sketch of one such attribution method follows below).
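For a concrete flavour of what such an integration could look like, here is a minimal sketch of one attribution method that DeepExplain offers (gradient * input), written directly against a Keras model rather than through DeepExplain's own API. This is an illustration, not part of mcfly: the function name `gradient_x_input` is hypothetical, and it assumes the trained model is a `tf.keras` model taking input of shape `(samples, timesteps, channels)`.

```python
import numpy as np
import tensorflow as tf

def gradient_x_input(model, x, class_idx):
    """Gradient*input attribution for a single time series sample.

    model: a trained tf.keras classification model.
    x: array of shape (1, n_timesteps, n_channels).
    class_idx: index of the output class to explain.
    Returns an attribution map with the same shape as x, where large
    absolute values mark time steps / channels that drove the prediction.
    """
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        # Score of the class we want to explain.
        score = model(x)[:, class_idx]
    grads = tape.gradient(score, x)
    return (grads * x).numpy()
```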

vincentvanhees commented 5 years ago

For image-based classification there are indeed striking examples of how features can be made interpretable. For deep learning approaches on time series it is less clear (at least to me) how you would go about visualising deep-learned features. A quick Google search led me to this 2016 paper by Shoaib Ahmed Siddiqui and colleagues, which may provide some answers. If you are aware of any other possible solutions, please let us know.

dynamic-biogeography commented 4 years ago

Hi, indeed it's not a trivial matter as far as I'm aware. Apart from methods specific to deep learning, which to my knowledge are only now starting to be explored (as in the Siddiqui paper), one possibility could be to implement model-agnostic methods as described here: https://christophm.github.io/interpretable-ml-book/agnostic.html

As an example: identifying which channel is most relevant for classifying some data could be achieved by permuting each channel individually and comparing the drop in accuracy against the reference (non-permuted) prediction; a sketch of this idea follows below. Having this option, together with the possibility to visualize the results in the model-comparison HTML report, for instance, would be extremely appealing.
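To make the permutation idea concrete, here is a minimal sketch. It is not part of mcfly: `channel_permutation_importance` is a hypothetical helper, and it assumes data in the `(samples, timesteps, channels)` layout that mcfly's Keras models expect, plus any "higher is better" metric.

```python
import numpy as np

def channel_permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Per-channel permutation importance for a time series classifier.

    X: array of shape (n_samples, n_timesteps, n_channels).
    metric: callable metric(y_true, y_pred) where higher is better.
    Returns the baseline score and, per channel, the mean drop in the
    metric when that channel is shuffled across samples.
    """
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    n_channels = X.shape[2]
    drops = np.zeros((n_channels, n_repeats))
    for c in range(n_channels):
        for r in range(n_repeats):
            X_perm = X.copy()
            # Shuffle channel c across samples, breaking its link to y
            # while leaving all other channels intact.
            rng.shuffle(X_perm[:, :, c], axis=0)
            drops[c, r] = baseline - metric(y, model.predict(X_perm))
    return baseline, drops.mean(axis=1)
```

With one-hot labels (as mcfly uses), usage could look like:

```python
from sklearn.metrics import accuracy_score

# Compare argmax class indices of one-hot truth and predicted probabilities.
acc = lambda y_true, y_pred: accuracy_score(y_true.argmax(1), y_pred.argmax(1))
baseline, importance = channel_permutation_importance(model, X_val, y_val, acc)
```

A large mean drop for a channel indicates the model relied on it; a drop near zero suggests the channel is uninformative for the task.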