interpretml / DiCE

Generate Diverse Counterfactual Explanations for any machine learning model.
https://interpretml.github.io/DiCE/
MIT License

Implementation of causality in DICE? #166

Open Saladino93 opened 3 years ago

Saladino93 commented 3 years ago

I am not sure whether I should ask here or email the authors of the paper; I do not know with whom else to discuss this, so I will try my luck here.

After reading the DiCE paper, my understanding is that DiCE, as it stands, does not consider causal relationships among features.

What I would like: suppose I have a causal graph over all of my variables (for example, built by hand, or learned via causal structure learning). I would like to pass this graph to DiCE's generate_counterfactuals function, something like this:

generate_counterfactuals(graph, query_instances, .....)

and have it generate feasible, more realistic counterfactuals.

Is this currently done somewhere?

I skimmed this paper https://arxiv.org/abs/1912.03277 (Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers), but it is not clear to me whether it can handle a general graph; plus, it seems computationally expensive and not very practical...

If not, a basic, naive idea would be to change the loss function from the DiCE paper to include relationships (more precisely, functional relationships; note I am saying "relationships" rather than "causal relationships", since we are not checking causality directly for now). Basically: represent the graph as a matrix encoding the known functional relationships, and take a norm over it. I am not sure this is clear, or whether it makes sense, but here is an example. Suppose I have X, Y, Z with X -> Y and Z -> Y, so the adjacency matrix for the graph is

```
    X  Y  Z
X [ 0, 1, 0 ]
Y [ 0, 0, 0 ]
Z [ 0, 1, 0 ]
```

Then, ideally, you know the independent variables, X and Z, and you know that Y = f(X, Z) ≈ Y_0 + (X - X_0, Z - Z_0)^T ∇f|_0, i.e. a first-order expansion of f around the original point (X_0, Z_0).

And there should be a way to encode this information in matrix form and penalize deviations from it with some norm...
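To make this concrete, here is a rough sketch of the penalty I have in mind (all names are hypothetical, and the gradients grad(f_j) would have to be estimated somehow):

```python
import numpy as np

def graph_penalty(A, x0, x_cf, grads):
    """Penalize counterfactuals that violate the (linearized) functional
    relationships encoded in the adjacency matrix A, where A[i, j] = 1
    means feature i -> feature j."""
    penalty = 0.0
    for j in range(A.shape[1]):
        parents = np.nonzero(A[:, j])[0]
        if len(parents) == 0:
            continue  # independent variable: no functional constraint
        # First-order prediction of feature j from the change in its parents:
        # x_j ~ x0_j + (x_parents - x0_parents)^T grad(f_j)
        delta = x_cf[parents] - x0[parents]
        predicted_j = x0[j] + delta @ grads[j][parents]
        penalty += (x_cf[j] - predicted_j) ** 2
    return penalty  # add this, suitably weighted, to the CF loss
```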

Does this make sense? What ideas are around for this?

I also have some questions about out-of-distribution counterfactuals, but let's leave those for now.

Temporary solutions?

It's a lot of stuff to ask, but I am still digesting all of this, and I think more clarity from others might be good!

Saladino93 commented 3 years ago

Ok, as a starting point I could use the results of the paper mentioned above, https://arxiv.org/abs/1912.03277 (Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers), as implemented here.

Does anyone know how to use it with sklearn? It seems to accept only a PyTorch model. Maybe there is a way to wrap sklearn models so they can be used in PyTorch.

amit-sharma commented 3 years ago

@Saladino93 the question of obeying (causal) relationships is an important one. Thanks for raising it here. The current research literature has taken a graph-based approach to specifying relationships; in practice, however, obtaining the causal graph is a big challenge.

Therefore, in DiCE, we've taken a different, constraint-based approach: the user provides constraints on CF generation, and DiCE applies them in generate_counterfactuals. Currently we support restricting the range of each variable and specifying which variables may change, and we plan to add more constraint types in the future. There is, however, a tradeoff with runtime and efficiency, and with optimization stability in the presence of many constraints.
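For example (a minimal sketch; the dataframe, model, and feature names are placeholders):

```python
import dice_ml

# df: pandas DataFrame with features and the outcome column 'y';
# clf: a trained sklearn classifier
d = dice_ml.Data(dataframe=df, continuous_features=['age', 'income'],
                 outcome_name='y')
m = dice_ml.Model(model=clf, backend='sklearn')
exp = dice_ml.Dice(d, m, method='random')

cf = exp.generate_counterfactuals(
    query_instances,
    total_CFs=4,
    desired_class='opposite',
    features_to_vary=['age', 'income'],  # only these features may change
    permitted_range={'age': [20, 60]},   # restrict the allowed range
)
```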

The other practical solution is to use an unconstrained CF generator in DiCE, generate a large number of CFs, and then apply the constraints post hoc. This has the benefit that the CF generator has well-understood properties and is guaranteed to give nearby CFs; the constraints then act as a filtering/ranking layer to select the desired ones. Personally, I would suggest this second approach, since it combines the simplicity of CF generation with arbitrary user-defined constraints.
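As a sketch of this second approach (reusing the exp explainer from the snippet above; the 'education' constraint is just an illustration):

```python
# Generate many unconstrained CFs, then filter post hoc.
cf = exp.generate_counterfactuals(query_instances, total_CFs=50,
                                  desired_class='opposite')
cf_df = cf.cf_examples_list[0].final_cfs_df  # CFs for the first query instance

# Example user-defined constraint: education can only increase
# relative to the query instance.
feasible = cf_df[cf_df['education'] >= query_instances['education'].iloc[0]]
```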

amit-sharma commented 3 years ago

Re the causal constraints paper: our paper on using causal constraints (the one you refer to) does need PyTorch to work, because it uses the gradient of the model. So it is not trivial to extend it to sklearn models.
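To see why a naive wrapper is not enough, consider this sketch (hypothetical code, not part of DiCE):

```python
import torch

class SklearnWrapper(torch.nn.Module):
    """Naive attempt to call an sklearn classifier from PyTorch."""

    def __init__(self, sk_model):
        super().__init__()
        self.sk_model = sk_model

    def forward(self, x):
        # predict_proba runs in numpy, outside the autograd graph, so the
        # returned tensor has no grad_fn; a gradient-based CF method cannot
        # backpropagate through the model.
        probs = self.sk_model.predict_proba(x.detach().numpy())
        return torch.from_numpy(probs)
```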

Saladino93 commented 3 years ago

Thanks a lot @amit-sharma !

My temporary solution for now was:

  • Generate a causal graph.
  • Use something like DoWhy to check for causality, e.g. estimate the ACE for each variable in the model (a sketch follows this list).
  • The most important variables, i.e. those with a non-zero estimated effect, are then used to constrain the CF generation.
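Roughly like this (a sketch; df, 'age', 'y', and causal_graph are placeholders):

```python
from dowhy import CausalModel

# df: pandas DataFrame; causal_graph: the graph, e.g. in DOT format
model = CausalModel(data=df, treatment='age', outcome='y',
                    graph=causal_graph)
estimand = model.identify_effect()
estimate = model.estimate_effect(estimand,
                                 method_name='backdoor.linear_regression')
# Keep 'age' as an actionable feature only if the estimated effect is non-zero.
print(estimate.value)
```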

But I will try the second approach you suggested! (If I remember correctly, this is also discussed in the causal section of your DiCE paper.) So I will generate CFs, then filter based on the most important variables from DoWhy, and on whether the 'functional' relationships make sense.

I will run some experiments and post the results here.

Thanks!

tonyabracadabra commented 2 years ago

> Thanks a lot @amit-sharma !
>
> My temporary solution for now was:
>
> • Generate a causal graph.
> • Use something like DoWhy to check for causality, e.g. estimate the ACE for each variable in the model.
> • The most important variables, i.e. those with a non-zero estimated effect, are then used to constrain the CF generation.
>
> But I will try the second approach you suggested! (If I remember correctly, this is also discussed in the causal section of your DiCE paper.) So I will generate CFs, then filter based on the most important variables from DoWhy, and on whether the 'functional' relationships make sense.
>
> I will run some experiments and post the results here.
>
> Thanks!

Are you trying to apply post-hoc filtering based on estimates from DoWhy? I assume this will be pretty costly, especially when you have a lot of counterfactuals, since each of them will formulate a different estimand and be estimated separately.