There are large differences between SHAP values calculated on CPU and on GPU for XGBoost models with `feature_perturbation='interventional'` and `model_output='log_loss'`. A detailed description of the bug, including reproduction steps and the traceback, is provided in https://github.com/shap/shap/issues/3655. As noted in that issue, the problem appears to originate in the SHAP implementation within the XGBoost repository. It would be great if this could be fixed.
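For context, here is a minimal sketch of the kind of CPU-vs-GPU comparison described above. It is not the exact reproduction from the linked issue: the dataset, model parameters, and the use of `shap.explainers.GPUTree` as the GPU path are assumptions on my part, and it requires a CUDA-enabled build of shap/XGBoost.

```python
import numpy as np
import xgboost
import shap
from sklearn.datasets import make_classification

# Toy data and a small XGBoost classifier (assumed setup; the linked
# issue contains the actual reproduction).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = xgboost.XGBClassifier(n_estimators=50, max_depth=3, random_state=0)
model.fit(X, y)

# CPU path: interventional TreeExplainer with log-loss model output.
cpu_explainer = shap.TreeExplainer(
    model,
    data=X,
    feature_perturbation="interventional",
    model_output="log_loss",
)
cpu_shap = cpu_explainer.shap_values(X, y)

# GPU path: assumed here to go through shap.explainers.GPUTree
# (requires CUDA); the issue compares the same configuration on GPU.
gpu_explainer = shap.explainers.GPUTree(
    model,
    data=X,
    feature_perturbation="interventional",
    model_output="log_loss",
)
gpu_shap = gpu_explainer.shap_values(X, y)

# The two results should agree closely; the issue reports large differences.
diff = np.abs(np.asarray(cpu_shap) - np.asarray(gpu_shap)).max()
print("max abs difference:", diff)
```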