youyinnn / masc_research_knowledge_base


Study: Interpretable Machine Learning #11

Open youyinnn opened 2 years ago

youyinnn commented 2 years ago

Read through chapters 2–10 of the book, gathering knowledge as soon as possible.

Summary

Machine learning has great potential for improving products, processes, and research. But computers usually do not explain their predictions, which is a barrier to the adoption of machine learning. This book is about making machine learning models and their decisions interpretable.

After exploring the concepts of interpretability, you will learn about simple, interpretable models such as decision trees, decision rules and linear regression. The focus of the book is on model-agnostic methods for interpreting black box models such as feature importance and accumulated local effects, and explaining individual predictions with Shapley values and LIME. In addition, the book presents methods specific to deep neural networks.

All interpretation methods are explained in depth and discussed critically. How do they work under the hood? What are their strengths and weaknesses? How can their outputs be interpreted? This book will enable you to select and correctly apply the interpretation method that is most suitable for your machine learning project. Reading the book is recommended for machine learning practitioners, data scientists, statisticians, and anyone else interested in making machine learning models interpretable.


Review

youyinnn commented 2 years ago

Progress:

youyinnn commented 2 years ago

How to compute the PDP: 4.1. Partial Dependence and Individual Conditional Expectation plots

The paper on PDP-based feature importance: https://arxiv.org/pdf/1805.04755.pdf
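The PDP computation referenced above can be sketched in a few lines: for each grid value of the feature of interest, force that value on every row of the data and average the model's predictions. This is a minimal illustration with a toy black-box function; `partial_dependence_1d` and the toy `predict` are hypothetical names for this sketch, not part of any library.

```python
import numpy as np

def partial_dependence_1d(predict, X, feature_idx, grid):
    """Marginal effect of one feature: for each grid value, fix that
    feature for every row and average the model's predictions."""
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = v  # replace the feature everywhere
        pd_values.append(predict(X_mod).mean())
    return np.array(pd_values)

# toy "black box": quadratic in feature 0, with an interaction term
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
predict = lambda X: X[:, 0] ** 2 + X[:, 0] * X[:, 1]
grid = np.linspace(-2, 2, 9)
pdp = partial_dependence_1d(predict, X, feature_idx=0, grid=grid)
```

Since feature 1 is centered near zero, the resulting PDP curve is roughly quadratic in the grid values, recovering the `x0**2` term of the toy model.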

youyinnn commented 2 years ago

A Survey of Methods for Explaining Black Box Models

youyinnn commented 2 years ago

How to compute ALE: Alibi: Accumulated Local Effects

Implementation:

  1. https://github.com/blent-ai/ALEPython
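Alongside the Alibi docs and ALEPython, the first-order ALE idea can be sketched directly: split the feature into quantile bins, compute the average prediction difference across each bin (holding all other features at their observed values), accumulate those local effects, and center the result. The function name `ale_1d` and the toy model are assumptions for this sketch, not any library's API.

```python
import numpy as np

def ale_1d(predict, X, feature_idx, n_bins=10):
    """First-order ALE: accumulate average local prediction
    differences within quantile bins of one feature, then center."""
    x = X[:, feature_idx]
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    effects = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x <= hi)
        if not mask.any():
            effects.append(0.0)
            continue
        X_lo, X_hi = X[mask].copy(), X[mask].copy()
        X_lo[:, feature_idx] = lo  # move rows to the bin's lower edge
        X_hi[:, feature_idx] = hi  # ... and to its upper edge
        # local effect: mean prediction change across the bin
        effects.append((predict(X_hi) - predict(X_lo)).mean())
    ale = np.cumsum(effects)       # accumulate local effects
    return edges, ale - ale.mean()  # center so the mean effect is zero

# toy model: linear in feature 0, nonlinear in feature 1
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
predict = lambda X: 3.0 * X[:, 0] + X[:, 1] ** 2
edges, ale = ale_1d(predict, X, feature_idx=0, n_bins=10)
```

Because only differences within bins are used, correlated features do not get evaluated at unrealistic combinations, which is ALE's main advantage over the PDP. For the linear toy model, the centered ALE curve is roughly linear with slope 3.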