interpretml / interpret-text

A library that incorporates state-of-the-art explainers for text-based machine learning models and visualizes the result with a built-in dashboard.
MIT License

Can I use IntrospectiveRationaleExplainer to explain a pre-trained model? #234

Open nochimake opened 9 months ago

nochimake commented 9 months ago

Hello, I have a pre-trained model for text sentiment polarity classification, with a structure roughly composed of RoBERTa+TextCNN. Can I use the Introspective Rationale Explainer to interpret its output? I aim to obtain the importance/contribution of each word towards the final predicted polarity.

Siddharth-Latthe-07 commented 4 months ago

@nochimake I would suggest trying Explainable AI (XAI) techniques, which aim to make the decision-making processes of machine learning models transparent and interpretable. Refer to this: https://github.com/explainX/explainx Through its LIME and SHAP integrations, it is possible to interpret a model's decisions through visualizations. You can also use the IntrospectiveRationaleExplainer for that, but check how faithful the resulting explanations are; in my opinion, the XAI route gives the best results here. Please let me know if this helps. Thanks
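To make the suggestion above concrete, here is a minimal sketch of occlusion-style word importance, the core idea behind perturbation explainers such as LIME: remove each word in turn and measure how much the model's score drops. The toy `score` function below is an assumption standing in for a real model's positive-class probability (e.g. your RoBERTa+TextCNN classifier); in practice you would pass your model's predict function instead.

```python
# Hypothetical stand-in for a real sentiment model's positive-class
# probability. Replace with your RoBERTa+TextCNN predict function.
POSITIVE = {"great", "love", "good"}
NEGATIVE = {"terrible", "hate", "bad"}

def score(tokens):
    """Toy positive-sentiment score in [0, 1]: +0.1 per positive word,
    -0.1 per negative word, centered at 0.5."""
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return max(0.0, min(1.0, 0.5 + 0.1 * (pos - neg)))

def word_importance(text):
    """Importance of each word = score drop when that word is removed.
    Positive values mean the word pushed the prediction toward the
    positive class; zero means removing it changed nothing."""
    tokens = text.lower().split()
    base = score(tokens)
    return {
        tok: base - score(tokens[:i] + tokens[i + 1:])
        for i, tok in enumerate(tokens)
    }

importances = word_importance("I love this great phone")
# Sentiment-bearing words ("love", "great") get nonzero importance;
# neutral words ("this", "phone") get zero.
```

Libraries like LIME refine this idea by sampling many random perturbations and fitting a local linear surrogate, rather than single-word occlusion, but the per-word attribution they return has the same interpretation.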