Closed: kyoungrok0517 closed this issue 5 years ago
UPDATE: is there any difference in the way CPU-trained and GPU-trained models are saved? Maybe there are peculiarities in the model file format that the shap library hasn't taken into account. If I can pin that down, I can explain it to the shap authors and request a fix.
For the context, please read https://github.com/slundberg/shap/issues/452#issuecomment-466109399
I'm trying to get feature attributions for a model trained on dual GPUs, using the shap library — specifically TreeExplainer().

Problem: the Python kernel dies as soon as I pass the model to TreeExplainer(), without any Python error or C-level dump.

Here's my code.
Here's the link to my model and the parameters.
Why am I posting a shap-related report here? Because the author of the shap library suspects there's a bug in the way XGBoost saves GPU-trained models. I've run the two tests the author asked me to try; he said that if both tests fail, the problem should be in the core of XGBoost, not in shap. Here's the quote from the shap author:
Actually I should be very cautious about attributing the problem to XGBoost, but since this issue is quite important to me, I'm asking the XGBoost team to check whether there's an error in the way my model is saved. Please test my model and help figure out the problem. Thanks!