FrieseWoudloper closed this issue 3 years ago.
The problem arises when the ordering of the columns in the explainer's data frame containing the instances differs from the ordering in the data frame containing the new observation.

Thank you for fixing this problem!
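The mechanism behind the bug can be illustrated outside of R: if column names are copied positionally from one table onto a second table whose columns are in a different order, every value ends up under the wrong label. A minimal sketch in Python/pandas (the data frames and the "buggy pattern" here are hypothetical illustrations, not the actual `localModel` code):

```python
import pandas as pd

# Explainer's training data: columns in one order
explainer_data = pd.DataFrame(
    {"gender": ["male"], "age": [8], "class": ["1st"]}
)

# New observation: same columns, different order
new_observation = pd.DataFrame(
    {"class": ["1st"], "gender": ["male"], "age": [8]}
)

# Buggy pattern: assign names positionally from the explainer's data.
# The value "1st" now sits under "gender", "male" under "age", etc.
mislabeled = new_observation.copy()
mislabeled.columns = explainer_data.columns

# Robust pattern: select columns by name, so order no longer matters.
aligned = new_observation[explainer_data.columns]

print(mislabeled)
print(aligned)
```

Selecting by name rather than by position makes the column ordering of the new observation irrelevant, which is the kind of fix the ordering mismatch calls for.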
The function `transform_to_interpretable` returns a data frame with the wrong column names. I'm using the latest `localModel` version from GitHub (0.5).

Test case:
I then run this code (taken from the `individual_surrogate_model` function):

Contents of `encoded_data`:

The column names are not correct:
`class` should be `gender`, `gender` should be `age`, et cetera.

I think this causes the counterintuitive LIME explanation for Johnny D's prediction from the random forest model found in section 9.6.2 of the book Explanatory Model Analysis:
http://ema.drwhy.ai/ema_files/figure-html/limeExplLocalModelTitanic-1.png
This problem is caused by a line of code in the function `transform_to_interpretable`:

This solution works for my test case, but might not work for all cases:
in which case the `new_observation` argument is obsolete.