SeldonIO / alibi

Algorithms for explaining machine learning models
https://docs.seldon.io/projects/alibi/en/stable/

Counterfactual search using gradient-free optimization #176

Closed jklaise closed 3 years ago

jklaise commented 4 years ago

Currently, the optimization for counterfactual search uses gradient descent in both the white-box (for TF and Keras models) and black-box (using numerical gradients) cases. However, this is far from ideal for models/prediction functions that are not inherently differentiable, e.g. tree ensembles. In such cases one could use more appropriate gradient-free methods (e.g. CMA-ES).

This is a long-term issue as it is not clear how the code should be structured to easily factor out different optimizers.
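The gradient-free idea can be illustrated with a toy sketch: a simple (1+λ) evolution strategy minimizing the usual counterfactual loss (prediction term plus distance to the original instance) against a non-differentiable black box. The `predict` function, the loss weighting `lam`, and all hyperparameters below are illustrative assumptions, not alibi's API or the proposed design.

```python
# Minimal sketch: gradient-free counterfactual search with a (1+lambda)
# evolution strategy. Only the black-box predictions are queried, never
# gradients, so this works for e.g. tree ensembles.
import random

def predict(x):
    # Stand-in non-differentiable black box (mimics a tree split).
    return 1 if x[0] + x[1] > 1.0 else 0

def loss(x, x0, target, lam=0.1):
    # Misclassification penalty plus L1 distance to the original instance.
    pred_penalty = 0.0 if predict(x) == target else 1.0
    dist = sum(abs(a - b) for a, b in zip(x, x0))
    return pred_penalty + lam * dist

def counterfactual_search(x0, target, sigma=0.5, pop=20, iters=200, seed=0):
    rng = random.Random(seed)
    best, best_loss = list(x0), loss(x0, x0, target)
    for _ in range(iters):
        # Sample Gaussian perturbations around the incumbent; keep the best.
        for _ in range(pop):
            cand = [xi + rng.gauss(0.0, sigma) for xi in best]
            l = loss(cand, x0, target)
            if l < best_loss:
                best, best_loss = cand, l
        sigma *= 0.95  # anneal the step size
    return best

x0 = [0.2, 0.3]                  # original instance, predicted class 0
cf = counterfactual_search(x0, target=1)
print(predict(cf))               # the counterfactual flips the prediction
```

A production version would replace this loop with a proper optimizer such as CMA-ES and add feature-range constraints, but the query-only structure is the point here.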

jklaise commented 4 years ago

This paper uses a genetic algorithm: https://arxiv.org/abs/2004.11165
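For flavour, a generic genetic-algorithm loop for the same counterfactual objective might look like the sketch below. This is a plain GA (truncation selection, uniform crossover, Gaussian mutation) and is not the linked paper's algorithm; the `predict` function and fitness weighting are made-up stand-ins.

```python
# Toy genetic-algorithm counterfactual search: evolve a population of
# candidate instances toward the target class while staying close to x0.
import random

def predict(x):
    # Stand-in non-differentiable black-box classifier.
    return 1 if x[0] + x[1] > 1.0 else 0

def fitness(x, x0, target, lam=0.1):
    # Lower is better: prediction penalty plus L1 distance to the original.
    penalty = 0.0 if predict(x) == target else 1.0
    return penalty + lam * sum(abs(a - b) for a, b in zip(x, x0))

def ga_counterfactual(x0, target, pop_size=30, gens=100, seed=0):
    rng = random.Random(seed)
    dim = len(x0)
    pop = [[xi + rng.gauss(0, 0.5) for xi in x0] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda x: fitness(x, x0, target))
        parents = pop[: pop_size // 2]        # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            # Uniform crossover, then occasional Gaussian mutation.
            child = [a[i] if rng.random() < 0.5 else b[i] for i in range(dim)]
            if rng.random() < 0.3:
                j = rng.randrange(dim)
                child[j] += rng.gauss(0, 0.2)
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda x: fitness(x, x0, target))

cf = ga_counterfactual([0.2, 0.3], target=1)
```

The multi-objective variants in the literature keep the prediction and distance terms as separate objectives rather than a weighted sum, which this sketch glosses over.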

InterferencePattern commented 3 years ago

It seems that this is an issue for CEM as well: the numerical gradients often fail to find a suitable Pertinent Negative in particular.

jklaise commented 3 years ago

@jimbudarz that's correct. We're currently doing some research on gradient-free methods for this. This is very much research in progress so we can't promise an integration in the near future, but we do have @RobertSamoilescu working on this as an internship project, so we hope to make some progress!

jklaise commented 3 years ago

Closing as this is solved by #457.