The summary is that a proof from game theory on the fair allocation of profits leads to a uniqueness result for feature attribution methods in machine learning. These unique values are called Shapley values, after Lloyd Shapley, who derived them in the 1950s. The SHAP values we use here result from a unification of several individualized model interpretation methods connected to Shapley values.
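As a concrete illustration of computing SHAP values in practice, here is a minimal sketch using the shap package with a scikit-learn tree model; the dataset and model are placeholders, not part of the original note:

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Illustrative data and model; in principle any fitted model can be explained.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values exactly (and fast) for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# For each prediction, the per-feature attributions plus the expected value
# of the model output reconstruct that prediction (local accuracy).
shap.summary_plot(shap_values, X)
```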
LIME: an algorithm that can explain the predictions of any classifier or regressor in a faithful way, by approximating it locally with an interpretable model.
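The local-approximation idea can be sketched from first principles: perturb the instance, query the black-box model, weight the perturbed samples by proximity, and fit an interpretable (linear) model on that weighted neighbourhood. The function below is a hypothetical minimal version of this scheme, not the LIME library itself:

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(predict_fn, x, scale=0.5, n_samples=500, kernel_width=0.75, seed=0):
    """Explain predict_fn near instance x with a weighted linear surrogate.

    predict_fn: black-box function mapping an (n, d) array to predictions.
    x: 1-D instance to explain. Returns per-feature linear coefficients.
    """
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise to sample its neighbourhood.
    Z = x + rng.normal(scale=scale, size=(n_samples, x.size))
    # 2. Query the black-box model on the perturbed points.
    preds = predict_fn(Z)
    # 3. Weight each sample by its proximity to x (exponential kernel).
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable (linear) model on the weighted neighbourhood.
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_  # local feature importances
```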
R. Kohavi, A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection, International Joint Conference on Artificial Intelligence (IJCAI), 14(12), pp. 1137–1145, 1995
10-fold cross-validation is the best compromise between bias and variance in most cases
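A minimal scikit-learn illustration of 10-fold cross-validation (the model and dataset are placeholders):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000)

# 10-fold cross-validation: each fold serves exactly once as the held-out set.
scores = cross_val_score(model, X, y, cv=10)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```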
While this study concentrates on cross-validation based model selection, the findings are quite general and apply to any model selection practice involving the optimisation of a model selection criterion evaluated over a finite sample of data, including maximisation of the Bayesian evidence and optimisation of performance bounds.
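One remedy discussed in that paper is nested cross-validation, which keeps the data used for hyperparameter selection separate from the data used to estimate performance. A sketch with scikit-learn, where the model and grid are illustrative assumptions:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Inner loop: model selection (hyperparameter tuning) on the training folds only.
inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=5)

# Outer loop: unbiased performance estimate of the *whole* selection procedure.
outer_scores = cross_val_score(inner, X, y, cv=10)
print(f"nested-CV accuracy: {outer_scores.mean():.3f}")
```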
Interpretability
Causal reasoning and machine learning
Model performance
AI in medicine
When to Impute? Imputation before and during cross-validation
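The safe pattern here is to fit the imputer inside each training fold (imputation during cross-validation) rather than on the full dataset beforehand, which would leak held-out statistics. A minimal scikit-learn sketch, with missing values injected artificially for illustration:

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_diabetes(return_X_y=True)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.1] = np.nan  # inject 10% missing values for illustration

# Imputing *during* cross-validation: the imputer's fold-wise means are learned
# on each training fold only, so nothing leaks from the held-out fold.
pipeline = make_pipeline(SimpleImputer(strategy="mean"), LinearRegression())
scores = cross_val_score(pipeline, X, y, cv=10)
print(f"R^2: {scores.mean():.3f}")
```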
Interpretability for Deep Neural Networks
A Survey on Neural Network Interpretability, December 2020
Paul Allison, Prediction vs. Causation in Regression Analysis, Statistical Horizons, 2014
Inference and Prediction comparison:
R. Kohavi, A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection, International Joint Conference on Artificial Intelligence (IJCAI), 14(12), pp. 1137–1145, 1995
G. C. Cawley and N. L. C. Talbot, On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation, Journal of Machine Learning Research, 11, pp. 2079–2107, 2010