interpretml / DiCE

Generate Diverse Counterfactual Explanations for any machine learning model.
https://interpretml.github.io/DiCE/
MIT License

Can it be applied to regression models? #320

Closed — xxkk1006 closed this issue 1 year ago

xxkk1006 commented 2 years ago

An example from the book *Interpretable Machine Learning*: "Anna wants to rent out her apartment, but she is not sure how much to charge for it, so she decides to train a machine learning model to predict the rent. Of course, since Anna is a data scientist, that is how she solves her problems. After entering all the details about size, location, whether pets are allowed and so on, the model tells her that she can charge 900 EUR. She expected 1000 EUR or more, but she trusts her model and decides to play with the feature values of the apartment to see how she can improve the value of the apartment. She finds out that the apartment could be rented out for over 1000 EUR, if it were 15 m2 larger. Interesting, but non-actionable knowledge, because she cannot enlarge her apartment. Finally, by tweaking only the feature values under her control (built-in kitchen yes/no, pets allowed yes/no, type of floor, etc.), she finds out that if she allows pets and installs windows with better insulation, she can charge 1000 EUR. Anna has intuitively worked with counterfactuals to change the outcome."
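The tweaking Anna does can be sketched as a brute-force counterfactual search over the actionable features. A minimal, self-contained illustration (the linear pricing model, feature names, and coefficients below are all made up; a real use would query a trained regressor instead):

```python
from itertools import product

# Hypothetical linear pricing model standing in for Anna's trained regressor.
def predict_rent(size_m2, pets_allowed, good_insulation):
    return 500 + 4.0 * size_m2 + 40 * pets_allowed + 60 * good_insulation

# Only the features Anna can actually change are searched;
# size is immutable because she cannot enlarge the apartment.
ACTIONABLE = {"pets_allowed": [0, 1], "good_insulation": [0, 1]}

def counterfactuals(size_m2, current, target_rent):
    """Enumerate actionable feature settings whose predicted rent meets the target."""
    found = []
    for pets, insul in product(ACTIONABLE["pets_allowed"], ACTIONABLE["good_insulation"]):
        candidate = {"pets_allowed": pets, "good_insulation": insul}
        if candidate != current and predict_rent(size_m2, pets, insul) >= target_rent:
            found.append(candidate)
    return found

# Anna's flat: 90 m2, currently no pets allowed, poor insulation.
print(predict_rent(90, 0, 0))  # 860.0
print(counterfactuals(90, {"pets_allowed": 0, "good_insulation": 0}, 950))
# Only allowing pets AND improving insulation reaches the target rent.
```

Real counterfactual libraries such as DiCE do essentially this, but with smarter search (random sampling, genetic algorithms, gradients) and with diversity and proximity objectives instead of plain enumeration.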

gaugup commented 1 year ago

@xxkk1006, we do support regression models. A sample notebook for regression is here: https://github.com/interpretml/DiCE/blob/master/docs/source/notebooks/DiCE_multiclass_classification_and_regression.ipynb.

Let us know if you have more questions.

Regards, Gaurav

amit-sharma commented 1 year ago

Closing the issue due to inactivity. @xxkk1006 feel free to reopen in case you have more questions.

FarzanT commented 8 months ago

@amit-sharma If I understand correctly, you currently don't support regression for PyTorch models, correct? I'm interested in generating counterfactual explanations for a transformer model whose output is a vector, not a single number.

amit-sharma commented 8 months ago

Yeah, we do not support explaining PyTorch regression models whose output is a vector.
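A common workaround in this situation (my suggestion, not an official DiCE feature) is to wrap the vector-output model behind a scalar "head" that returns a single component or reduction of the output, and explain that scalar instead. A minimal sketch with a made-up stand-in model:

```python
# Hypothetical workaround: scalar-only explainers can be applied to a
# vector-output model by wrapping it so each input yields one number.
class ScalarHead:
    """Wraps a vector-output model and exposes a single output dimension."""

    def __init__(self, model, index):
        self.model = model
        self.index = index  # which output dimension to explain

    def predict(self, rows):
        # Return one scalar per input row, as scalar explainers expect.
        return [self.model(row)[self.index] for row in rows]

# Stand-in "transformer": maps a feature row to a 3-dimensional output vector.
def toy_vector_model(row):
    s = sum(row)
    return [s, 2 * s, -s]

head = ScalarHead(toy_vector_model, index=1)
print(head.predict([[1, 2], [3, 4]]))  # [6, 14]
```

One such wrapper per output dimension (or per reduction, e.g. a norm) then gives a set of scalar models that a counterfactual explainer can handle individually.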