christophM / interpretable-ml-book

Book about interpretable machine learning
https://christophm.github.io/interpretable-ml-book/

how to figure out whether the model works in the right way or not? #186

Closed chansource closed 4 years ago

chansource commented 4 years ago

Dear sir: Since a model explainer gives insight into how changes in the features affect the model's prediction, is there any criterion to figure out whether the model works in the right way?

For example, if the explainer says that mortality decreases with age, the model may be wrong. But in practice it is difficult to know how the prediction should change with a feature. So I wonder whether there is any criterion that can help.
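To make the question concrete, here is a minimal sketch of how such a feature effect might be computed: a hand-rolled partial dependence of the predicted mortality on age. The classifier, data, and column names are made up purely for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def feature_effect(model, X, feature, n_grid=20):
    """Hand-rolled partial dependence: average predicted probability as one
    feature is swept over a grid while all other columns stay fixed."""
    grid = np.linspace(X[feature].min(), X[feature].max(), n_grid)
    means = []
    for value in grid:
        X_mod = X.copy()
        X_mod[feature] = value            # force every row to this value
        means.append(model.predict_proba(X_mod)[:, 1].mean())
    return pd.DataFrame({feature: grid, "mean_prediction": means})

# Toy example with synthetic data (column names are hypothetical):
rng = np.random.default_rng(0)
X = pd.DataFrame({"age": rng.uniform(20, 90, 500),
                  "blood_pressure": rng.normal(120, 15, 500)})
y = (X["age"] + rng.normal(0, 10, 500) > 60).astype(int)  # mortality rises with age
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

print(feature_effect(model, X, "age"))
# If this curve decreased with age, it would contradict domain knowledge
# and be a reason to distrust the model or the data.
```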

Furthermore, if the model is not right, e.g. because of spurious correlations, are there any common tools that can point out how to fix it?

Thank you in advance for your help.

christophM commented 4 years ago

One way to know whether the model is "right" is to check that it has good predictive performance. If your model is bad, you can do all the interpretation you want, but it will be garbage as well.
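As a minimal sketch of such a performance check, one could estimate out-of-sample performance with cross-validation before doing any interpretation. The data and estimator below are placeholders using scikit-learn; substitute your own.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data and model; replace with your own X, y, and estimator.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)

# Estimate out-of-sample performance before trusting any interpretation.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"5-fold ROC AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```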

The other question, whether the model identified the causal relationships, is more difficult. For this I would suggest you have a look at the causality literature.