XAI-ANITI / ethik

:mag_right: A toolbox for fair and explainable machine learning
https://xai-aniti.github.io/ethik/
GNU General Public License v3.0

Check if the distribution is a dirac #94

Closed Vayel closed 5 years ago

Vayel commented 5 years ago

@MaxHalford for ImageExplainer, it's always the case:


Both quantiles are equal to zero.

Is it expected?

MaxHalford commented 5 years ago

Do you mean that bias_low == bias_high even when n_samples > 1?

Vayel commented 5 years ago

I mean that bias(tau = -1) == bias(tau = 1). But it's because I only checked black pixels...


It still raises the question of whether to issue a warning when the distribution is a Dirac (which is the case for pixels that are black across all samples).

MaxHalford commented 5 years ago

Yeah indeed, there is no variation whatsoever if the pixels are black.

Vayel commented 5 years ago

To avoid triggering the warning for images, we could only emit it for the local bias. That actually makes sense: if the whole dataset is a Dirac, we can consider it the user's mistake. But for the local bias we sample a subset of the dataset, so we need to make sure we do that properly.

MaxHalford commented 5 years ago

We should issue a warning when a variable has only one unique value, because that is typically an edge case. However, for images this case arises quite regularly, so we shouldn't issue a warning there.
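A minimal sketch of that check, assuming NumPy and the standard `warnings` module; `ConstantVariableWarning` and `check_not_dirac` are hypothetical names for illustration, not part of ethik's API:

```python
import warnings

import numpy as np


class ConstantVariableWarning(UserWarning):
    """Hypothetical warning category for features with a single unique value."""


def check_not_dirac(values, name):
    """Warn if the distribution of ``values`` is a Dirac, i.e. has one unique value."""
    if np.unique(values).size == 1:
        warnings.warn(
            f"feature {name!r} takes a single value, so its bias cannot vary with tau",
            ConstantVariableWarning,
        )


# A pixel that is black across every sample triggers the warning.
check_not_dirac(np.zeros(100), "pixel_0_0")
```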

It seems that we can catch warnings in Python and suppress them. See this StackOverflow post. I think we can use the category parameter of warnings.simplefilter.
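Something along these lines, reusing the hypothetical `ConstantVariableWarning` and `check_not_dirac` names from the sketch above; only that category is silenced, so other warnings still reach the user:

```python
import warnings

import numpy as np

# Inside the image explainer, ignore only the constant-variable warnings.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", category=ConstantVariableWarning)
    check_not_dirac(np.zeros(100), "pixel_0_0")  # emits nothing here
```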