DBrughmans / NICE


Questions #7

Closed: lnilya closed this 9 months ago

lnilya commented 10 months ago

1. Nearest Neighbours

From what I understand, the algorithm always finds the nearest neighbour of the opposite class in the training data.

This makes the counterfactuals deterministic, and you only ever get a single one.

Is there anything that would prevent picking M nearest unlike neighbours that optimize a certain criterion, e.g. being as far apart from one another as possible?

Running the rest of NICE as is, but starting from each of these neighbours, would then yield a diverse set of counterfactuals? What do you think of that?

Do you think there is a simpler way to do this? Maybe one could generate a NICE solution for all M neighbours and then use the proximity/sparsity/plausibility reward functions to pick out a diverse set of K CF examples?
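Something like this is what I have in mind; `run_nice` and `distance` are placeholders for running NICE from a given NUN and for a distance metric, not part of the actual API:

```python
def diverse_k_of_m(x, nuns, run_nice, distance, k):
    """Run NICE once per NUN, then greedily keep the k counterfactuals
    that are maximally far apart (max-min selection)."""
    candidates = [run_nice(x, nun) for nun in nuns]  # one CF per NUN
    chosen = [candidates.pop(0)]
    while candidates and len(chosen) < k:
        # take the candidate whose nearest already-chosen CF is farthest away
        best_i = max(range(len(candidates)),
                     key=lambda i: min(distance(candidates[i], c) for c in chosen))
        chosen.append(candidates.pop(best_i))
    return chosen
```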

2. Justified CF

What does the justified CF parameter do?

3. Auto Encoder

What is the auto encoder parameter? It seems to be required for the plausibility type of CF, but there seems to be no explanation.

Thank you so much for your input

DBrughmans commented 10 months ago

Hi lnilya,

1. Nearest Neighbours

From what I understand, the algorithm always finds the nearest neighbour of the opposite class in the training data.

There are four versions of NICE. They all start by picking the nearest unlike neighbour (NUN).

1. Version "none" stops here and simply returns the NUN as the counterfactual.
2. Version "sparsity" searches for a combination of the NUN and the instance to explain that changes as few features of the instance to explain as possible (it optimizes the L0-distance).
3. Version "proximity" can be used to optimize for other distance metrics. The code currently includes HEOM as a distance metric, but you can customize it for other distance metrics if needed.
4. Version "plausibility" optimizes the auto-encoder loss. Sparse or proximal counterfactuals are sometimes very unrealistic; in other words, they do not respect the underlying distribution of your data. The plausibility version tries to solve this problem. If you're interested in plausibility, the "none" version is also interesting: it returns real instances from the dataset, which are by definition plausible.
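You select the version through the optimization argument when you initialise the class. Roughly like this; the exact argument names can differ between nicex releases, so check the README of the version you installed:

```python
from nice import NICE

# Sketch only: argument names follow one recent nicex release and may differ.
explainer = NICE(
    X_train=X_train,            # training data the NUN is drawn from
    predict_fn=predict_fn,      # function returning the model's class scores
    y_train=y_train,
    cat_feat=cat_feat,          # indices of categorical features
    num_feat=num_feat,          # indices of numerical features
    optimization='sparsity',    # 'none' | 'sparsity' | 'proximity' | 'plausibility'
    justified_cf=True,
)
cf = explainer.explain(x)       # x: the instance to explain
```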

This makes the counterfactuals deterministic, and you only ever get a single one.

Yes, the code is deterministic: neither the NUN selection nor the greedy optimization method has any stochastic factors.

Is there anything that would prevent picking M nearest unlike neighbours that optimize a certain criterion, e.g. being as far apart from one another as possible? Running the rest of NICE as is, but starting from each of these neighbours, would then yield a diverse set of counterfactuals? What do you think of that? Do you think there is a simpler way to do this? Maybe one could generate a NICE solution for all M neighbours and then use the proximity/sparsity/plausibility reward functions to pick out a diverse set of K CF examples?

I think it's a valid method to generate diverse counterfactuals. Depending on your dataset, the explanations might still be very similar, especially for the sparsity and proximity versions. If this is not what you want, you could impose diversity on the explanations by defining a new distance function that includes the inverse distance to the other explanations, and use this in the proximity version of NICE.
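A minimal sketch of that idea; `base_dist` stands in for HEOM or whatever metric you use, and `lam` controls how strongly diversity is rewarded (both hypothetical names, not part of NICE):

```python
def diversity_distance(x, candidate, found_cfs, base_dist, lam=1.0, eps=1e-8):
    """Distance to the instance to explain, plus a penalty that grows as the
    candidate gets close to counterfactuals that were already found."""
    d = base_dist(x, candidate)
    if found_cfs:
        # inverse distance to the nearest existing explanation
        nearest = min(base_dist(cf, candidate) for cf in found_cfs)
        d += lam / (nearest + eps)
    return d
```

You would then run the proximity version once per counterfactual, feeding the previously found explanations into found_cfs.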

2. Justified CF

If the justified CF parameter is true, we filter the list of possible NUNs to those that are correctly classified by the model (y = y_hat). This seems to improve plausibility and cross-model robustness.
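This is not the actual internal code, but the filter amounts to something like the following (assuming predict_fn returns class probabilities):

```python
import numpy as np

def justified_mask(X_train, y_train, predict_fn):
    """Keep only training instances the model classifies correctly;
    candidate NUNs are then drawn from X_train[mask] alone."""
    y_hat = np.argmax(predict_fn(X_train), axis=1)
    return y_hat == y_train
```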

3. Auto encoder

If you use the plausibility version, you have to train an auto encoder on your training data and provide it to the auto_encoder parameter when you initialise the class. If you look at the history of this repo, you can find an AE in previous versions of the code (https://github.com/DBrughmans/NICE/blob/5e4e4522b9227b5d415751a2ae3f6c8161457946/NICE/utils/AE.py). I excluded it in newer versions to drop the nicex package's dependency on tensorflow.
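For reference, a minimal tensorflow sketch of training such an auto encoder. The architecture is illustrative, and the interface NICE expects from the auto_encoder object may differ per version, so compare with the linked AE.py:

```python
import tensorflow as tf

def build_ae(input_dim, latent_dim=8):
    """Tiny dense auto-encoder: compress to latent_dim, reconstruct the input."""
    inp = tf.keras.Input(shape=(input_dim,))
    z = tf.keras.layers.Dense(latent_dim, activation='relu')(inp)
    out = tf.keras.layers.Dense(input_dim, activation='linear')(z)
    ae = tf.keras.Model(inp, out)
    ae.compile(optimizer='adam', loss='mse')
    return ae

ae = build_ae(X_train.shape[1])
ae.fit(X_train, X_train, epochs=50, batch_size=64, verbose=0)
# then: NICE(..., optimization='plausibility', auto_encoder=ae)
```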