Open SiyuanPengMike opened 5 years ago
There's an overview, from ML perspective, on the importance of interpretability here: https://christophm.github.io/interpretable-ml-book/interpretability-importance.html
Thanks!
Thanks for your great topic and inspiring paper. I indeed learned a lot about the use of supervised machine learning in political science. However, I still feel a little puzzled about the limitations of this method. In your conclusion, you said that "One limitation of the supervised learning approach is that it reveals relatively little about the details of the mapping process." I'm not quite clear on why this is a limitation. I mean, the reason we use machine learning is that it can interpret data in ways humans can't; it can find connections in the data in a more sophisticated way. Why do we need to know the mapping process? If a weird mapping process is actually effective at predicting future outcomes, should we abandon it, or adjust it so that it makes sense to us?
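For what it's worth, "knowing the mapping process" doesn't have to mean reading the model's internals; there are model-agnostic probes. Below is a minimal sketch of one such technique, permutation importance: shuffle one input feature at a time and watch how much predictive accuracy drops. The toy "black box" model and its feature weights here are entirely hypothetical, just to make the idea concrete; they are not from the paper.

```python
import random

# Hypothetical fitted "black box" model: we only observe inputs -> predictions.
# (It secretly depends mostly on feature 0, a little on feature 1, not on 2.)
def model(x):
    return 1 if 2.0 * x[0] + 0.5 * x[1] > 1.0 else 0

random.seed(0)
X = [[random.random() for _ in range(3)] for _ in range(500)]
y = [model(x) for x in X]  # use the model's own labels as ground truth

def accuracy(preds, labels):
    return sum(p == t for p, t in zip(preds, labels)) / len(labels)

baseline = accuracy([model(x) for x in X], y)  # 1.0 by construction here

# Permutation importance: shuffle one feature column at a time and
# record the accuracy drop -- a cheap peek at what drives the mapping.
importances = []
for j in range(3):
    col = [x[j] for x in X]
    random.shuffle(col)
    X_perm = [x[:j] + [col[i]] + x[j + 1:] for i, x in enumerate(X)]
    drop = baseline - accuracy([model(x) for x in X_perm], y)
    importances.append(drop)

print(importances)  # feature 0 shows the largest drop; feature 2 shows none
```

So even if we keep the weird-but-accurate model, probes like this let us sanity-check it (e.g., is it leaning on a feature that would be indefensible in a political-science argument?) without abandoning its predictive power.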