nochimake opened this issue 2 months ago
Hi @nochimake --
For local importance, you can use the eval_terms function: https://interpret.ml/docs/python/api/ExplainableBoostingClassifier.html#interpret.glassbox.ExplainableBoostingClassifier.eval_terms
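A minimal sketch of what that could look like (assuming `ebm` is your fitted `ExplainableBoostingClassifier` and `X_train` is the same feature matrix it was trained on):

```python
# Per-sample, per-term contributions to the prediction
# (for binary classification, one row of term scores per sample).
local_scores = ebm.eval_terms(X_train)        # shape: (n_samples, n_terms)

# Contributions for the first sample, paired with the term (feature) names.
for name, score in zip(ebm.term_names_, local_scores[0]):
    print(name, score)
```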
If you also want the global importances, those can be obtained with the term_importances function: https://interpret.ml/docs/python/api/ExplainableBoostingClassifier.html#interpret.glassbox.ExplainableBoostingClassifier.term_importances
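And a corresponding sketch for the global view (by default the importance is the mean absolute contribution of each term, averaged over the training data):

```python
# One global importance value per term.
importances = ebm.term_importances()          # shape: (n_terms,)

# Terms sorted from most to least important.
for name, imp in sorted(zip(ebm.term_names_, importances), key=lambda t: t[1], reverse=True):
    print(name, imp)
```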
I have a text sentiment polarity prediction model, roughly structured as RoBERTa + CNN. Now, I want to use InterpretML to explain its prediction results. My code is as follows:
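(Simplified sketch of the code; the exact `DataGenerator` interface, the `my_model.predict` call, and the EBM parameters are abbreviated here.)

```python
from transformers import RobertaTokenizer
from interpret.glassbox import ExplainableBoostingClassifier

# DataGenerator is my text-processing class; for now it just wraps RoBERTa's
# tokenizer and maps each text to a fixed-length row of token IDs.
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
data_generator = DataGenerator(tokenizer, max_len=128)

X_train = data_generator.transform(train_texts)   # token-ID matrix, one row per text
y_train = my_model.predict(train_texts)           # labels predicted by my RoBERTa + CNN model

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)
ebm_local = ebm.explain_local(X_train, y_train)
```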
Here, `DataGenerator` is the text-processing class for my model; for now I'm using RoBERTa's tokenizer to map the text to the token IDs the model requires, and `y_train` represents the labels predicted by my model. After the statement `ebm_local = ebm.explain_local(X_train, y_train)`, how can I obtain the importance of each word? I have seen people use the `ebm_local.get_local_importance_dict()` method, but I can't find this method in version 0.5.1.