barnardp opened this issue 2 years ago
Hi @barnardp,
Thanks for opening this issue. I ran some experiments locally with the TensorFlow backend. The loading and saving functionality seems to work well, but there are two things I can think of that could cause the drop in performance:
1. Make sure that you are using the same prediction model and are not training a new one. The counterfactual generator is bound to the model, so if you train a new model you also need to train a new counterfactual generator; the old one might no longer work. (See the first sketch after this list.)
2. When we return the explanation object, the labels for the counterfactuals stored in `explanation.data['cf']['class']` (call them `y_cf`) have shape `(N, 1)`, where `N` is the number of instances for which a counterfactual was generated. The true targets (call them `y_target`) might have shape `(N,)`, so comparing them as `y_cf == y_target` broadcasts to a tensor of shape `(N, N)` that compares every counterfactual class against every target class; taking the mean of that `(N, N)` tensor produces exactly the kind of "drop" in performance you describe. We will fix this shortly to make the tensor shapes consistent. In the meantime, if this is what is happening, a simple workaround is to compare them as `y_cf.flatten() == y_target`, or just flatten both. (See the second sketch below.)
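For point 1, here is a minimal sketch of the save/load round trip using alibi's `save`/`load_explainer` utilities; the paths are placeholders, and `explainer` stands in for your fitted `CounterfactualRL` instance:

```python
import tensorflow as tf
from alibi.saving import load_explainer

# After training: persist the fitted explainer to disk.
explainer.save('./cfrl_explainer/')

# Later, in a new session: reload the ORIGINAL trained predictor and pass it
# to the loaded explainer. Do not train a fresh model here -- the
# counterfactual generator only works with the predictor it was fit on.
model = tf.keras.models.load_model('./model')
loaded_explainer = load_explainer('./cfrl_explainer/', predictor=model)
```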
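For point 2, a small NumPy illustration of the shape pitfall (the label values are made up for the example):

```python
import numpy as np

# Hypothetical labels: counterfactual classes come back with shape (N, 1),
# while the true targets have shape (N,).
y_cf = np.array([[1], [0], [1], [1]])  # shape (4, 1)
y_target = np.array([1, 0, 1, 0])      # shape (4,)

# Broadcasting silently turns the element-wise check into an all-pairs one.
print((y_cf == y_target).shape)             # (4, 4)
print((y_cf == y_target).mean())            # 0.5 -- misleading "success rate"

# Flattening restores the intended element-wise comparison.
print((y_cf.flatten() == y_target).mean())  # 0.75 -- the correct success rate
```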
If none of the above causes the drop in performance, it would be great if you could share the script or notebook with us so we can replicate the issue locally and address it. Thank you!
Hi,
Thanks for the great package! I've recently been exploring the CounterfactualRL approach. One problem I've noticed is that whenever I save a trained explainer and load it back later, the success rate of the loaded explainer (measured as how often the explainer produces counterfactuals that actually flip the model's prediction to the target class) drops dramatically. For example, my original explainer tends to reach a success rate of over 95%, while the loaded one typically falls below 50%. Is there any way to avoid this?
Cheers, Pieter