SeldonIO / alibi

Algorithms for explaining machine learning models
https://docs.seldon.io/projects/alibi/en/stable/

Why are my Anchor Images just a black color image? #629

Open · krishnakripaj opened this issue 2 years ago

krishnakripaj commented 2 years ago

I have a CNN that classifies an image into one of two classes ("Fake" or "Real"), and I want to obtain the anchors for a couple of images' predictions. The anchor images are produced correctly for images of the "Fake" class but not for the "Real" class. I can see the superpixels in the anchor image for an image that falls into the "Fake" class, but I get an empty black-background anchor image (with no superpixels) for images classified as "Real". Why am I not getting an anchor image with superpixels that satisfies the "Real" prediction?

Here's my code:

from alibi.explainers import AnchorImage

# Black-box prediction function wrapping the trained CNN.
def predict_fn(x): return model.predict(x)

image_shape = (224, 224, 3)
segmentation_fn = 'slic'
args = {'sigma': .5}
explainer = AnchorImage(predict_fn,
                        image_shape,
                        segmentation_fn=segmentation_fn,
                        segmentation_kwargs=args,
                        images_background=None)

explanation_img = explainer.explain(image,
                                    threshold=0.80,
                                    p_sample=0.5,
                                    tau=0.25)
explanation_anchor = explanation_img.anchor
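
A quick way to confirm that the returned anchor is genuinely empty, rather than just rendered darkly, is to count its non-zero pixels and plot it next to the instance. A minimal sketch, assuming image and explanation_img from the snippet above:

import numpy as np
import matplotlib.pyplot as plt

# A fully empty anchor is an all-zeros array; count the non-zero pixels.
anchor = explanation_img.anchor
print("non-zero anchor pixels:", np.count_nonzero(anchor))

# Show the instance next to the anchor for a visual comparison.
fig, axes = plt.subplots(1, 2)
axes[0].imshow(image)
axes[0].set_title('instance')
axes[1].imshow(anchor)
axes[1].set_title('anchor')
plt.show()
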
mauicv commented 2 years ago

Dear @krishnakripaj, Thanks for opening the issue. This sounds a lot like this. What does your dataset look like? How balanced is it?

krishnakripaj commented 2 years ago

> Dear @krishnakripaj, Thanks for opening the issue. This sounds a lot like this. What does your dataset look like? How balanced is it?

That's what I initially thought too, but my training dataset is balanced: I have about 21k images in each of the fake and real classes.

mauicv commented 2 years ago

Hmmm, I think it's still likely to be one of the causes discussed there.

Is it possible to share with me a working example so I can investigate? Does it return empty anchors for all real instances or just some?

krishnakripaj commented 2 years ago

I did a few tests. I used the same sample data (a couple of image frames from a video) and ran the anchor algorithm a few times without changing any parameters. The majority of the time I get black images, but on rare occasions I got (really small) anchors. This behavior is unpredictable, though.

These were the parameters I had used for the segmentation: args = {'n_segments': 35, 'compactness': 18, 'sigma': .5}

In the instances where I get black images, I looked at the verbose logs and at the precision and coverage. When a black image is generated, the precision comes back as an array, like below:

Precision: [1.] Coverage: 1

In almost all cases, the coverage and precision are exactly 1. I also don't get any verbose logs.

When I did manage to get the small anchors, I got the verbose logs as normal.

Best of size  1 : 21 1.0 0.9574840720116412 1.0 (21,) mean = 1.00 lb = 0.96 ub = 1.00 coverage: 0.50 n: 53
Found eligible result  (21,) Coverage: 0.5022 Is best? True
Precision: 1.0 Coverage: 0.5022 

Best: 0 (mean:1.0000000000, n: 48, lb:0.7423) Worst: 12 (mean:0.9767, n: 43, ub:1.0000) B = 0.26
Best: 20 (mean:1.0000000000, n: 45, lb:0.7145) Worst: 1 (mean:0.9767, n: 43, ub:1.0000) B = 0.29
Best: 31 (mean:1.0000000000, n: 43, lb:0.6956) Worst: 33 (mean:0.9787, n: 47, ub:1.0000) B = 0.30
Best: 30 (mean:1.0000000000, n: 47, lb:0.7122) Worst: 15 (mean:0.9792, n: 48, ub:1.0000) B = 0.29
Best of size  1 : 30 1.0 0.9844571887811345 1.0 (30,) mean = 1.00 lb = 0.98 ub = 1.00 coverage: 0.50 n: 147
Precision: 1.0 Coverage: 0.4961

When I reduced n_segments to a smaller number like 15, however, the chances of ending up with anchor images are better.
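
One way to sanity-check the effect of n_segments is to run the same SLIC segmentation outside the explainer and inspect the superpixel boundaries directly. A minimal sketch using scikit-image's slic (assuming image is the array passed to explain):

import matplotlib.pyplot as plt
from skimage.segmentation import mark_boundaries, slic

# Same parameters as passed to AnchorImage via segmentation_kwargs.
segments = slic(image, n_segments=35, compactness=18, sigma=0.5)
print("superpixels found:", segments.max() + 1)

# Overlay the superpixel boundaries on the image to judge the granularity.
plt.imshow(mark_boundaries(image, segments))
plt.show()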

But I am really confused: why aren't the anchors being generated for the "Real" class when it works well with the "Fake" class?

Btw, thank you so much for taking the time to look into this question and provide your valuable help.

krishnakripaj commented 2 years ago

I am also attaching a sample response from the explainer.explain method for an instance where the anchor is empty.

Explanation(meta={
  'name': 'AnchorImage',
  'type': ['blackbox'],
  'explanations': ['local'],
  'params': {
              'custom_segmentation': False,
              'segmentation_kwargs': {
                                       'n_segments': 35,
                                       'compactness': 18,
                                       'sigma': 0.5}
                                     ,
              'p_sample': 0.5,
              'seed': None,
              'image_shape': (224, 224, 3),
              'images_background': None,
              'segmentation_fn': 'slic',
              'threshold': 0.95,
              'delta': 0.1,
              'tau': 0.25,
              'batch_size': 100,
              'coverage_samples': 10000,
              'beam_size': 1,
              'stop_on_first': False,
              'max_anchor_size': None,
              'min_samples_start': 100,
              'n_covered_ex': 10,
              'binary_cache_size': 10000,
              'cache_margin': 1000,
              'verbose': 'True',
              'verbose_every': 1,
              'kwargs': {}}
            ,
  'version': '0.6.5'}
, data={
  'anchor': array([[[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        ...,
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]],

       [[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        ...,
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]],

       [[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        ...,
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]],

       ...,

       [[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        ...,
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]],

       [[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        ...,
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]],

       [[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        ...,
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]]]),
  'segments': array([[ 1,  1,  1, ...,  6,  6,  6],
       [ 1,  1,  1, ...,  6,  6,  6],
       [ 1,  1,  1, ...,  6,  6,  6],
       ...,
       [33, 33, 33, ..., 29, 29, 29],
       [33, 33, 33, ..., 29, 29, 29],
       [33, 33, 33, ..., 29, 29, 29]], dtype=int64),
  'precision': array([1.]),
  'coverage': 1,
  'raw': {
           'feature': [],
           'mean': [],
           'num_preds': 100,
           'precision': [],
           'coverage': [],
           'examples': [],
           'all_precision': array([1.]),
           'success': True,
           'instance': array([[[0.09019608, 0.15294118, 0.10588235],
        [0.09019608, 0.15294118, 0.10588235],
        [0.09019608, 0.15294118, 0.10588235],
        ...,
        [0.14117648, 0.18039216, 0.14901961],
        [0.14117648, 0.18039216, 0.14901961],
        [0.14117648, 0.18039216, 0.14901961]],

       [[0.09019608, 0.15294118, 0.10588235],
        [0.09019608, 0.15294118, 0.10588235],
        [0.09019608, 0.15294118, 0.10588235],
        ...,
        [0.14117648, 0.18039216, 0.14901961],
        [0.14117648, 0.18039216, 0.14901961],
        [0.14117648, 0.18039216, 0.14901961]],

       [[0.09019608, 0.15294118, 0.10588235],
        [0.09019608, 0.15294118, 0.10588235],
        [0.09019608, 0.15294118, 0.10588235],
        ...,
        [0.14117648, 0.18039216, 0.14901961],
        [0.14117648, 0.18039216, 0.14901961],
        [0.14117648, 0.18039216, 0.14901961]],

       ...,

       [[0.11372549, 0.11764706, 0.13333334],
        [0.11372549, 0.11764706, 0.13333334],
        [0.11372549, 0.11764706, 0.13333334],
        ...,
        [0.13725491, 0.14901961, 0.18039216],
        [0.13725491, 0.14901961, 0.18039216],
        [0.13725491, 0.14901961, 0.18039216]],

       [[0.11372549, 0.11764706, 0.13333334],
        [0.11372549, 0.11764706, 0.13333334],
        [0.11372549, 0.11764706, 0.13333334],
        ...,
        [0.13725491, 0.14901961, 0.18039216],
        [0.13725491, 0.14901961, 0.18039216],
        [0.13725491, 0.14901961, 0.18039216]],

       [[0.11764706, 0.11764706, 0.13333334],
        [0.11764706, 0.11764706, 0.13333334],
        [0.11764706, 0.11764706, 0.13333334],
        ...,
        [0.13725491, 0.14901961, 0.18039216],
        [0.13725491, 0.14901961, 0.18039216],
        [0.13725491, 0.14901961, 0.18039216]]], dtype=float32),
           'instances': array([[[[0.09019608, 0.15294118, 0.10588235],
         [0.09019608, 0.15294118, 0.10588235],
         [0.09019608, 0.15294118, 0.10588235],
         ...,
         [0.14117648, 0.18039216, 0.14901961],
         [0.14117648, 0.18039216, 0.14901961],
         [0.14117648, 0.18039216, 0.14901961]],

        [[0.09019608, 0.15294118, 0.10588235],
         [0.09019608, 0.15294118, 0.10588235],
         [0.09019608, 0.15294118, 0.10588235],
         ...,
         [0.14117648, 0.18039216, 0.14901961],
         [0.14117648, 0.18039216, 0.14901961],
         [0.14117648, 0.18039216, 0.14901961]],

        [[0.09019608, 0.15294118, 0.10588235],
         [0.09019608, 0.15294118, 0.10588235],
         [0.09019608, 0.15294118, 0.10588235],
         ...,
         [0.14117648, 0.18039216, 0.14901961],
         [0.14117648, 0.18039216, 0.14901961],
         [0.14117648, 0.18039216, 0.14901961]],

        ...,

        [[0.11372549, 0.11764706, 0.13333334],
         [0.11372549, 0.11764706, 0.13333334],
         [0.11372549, 0.11764706, 0.13333334],
         ...,
         [0.13725491, 0.14901961, 0.18039216],
         [0.13725491, 0.14901961, 0.18039216],
         [0.13725491, 0.14901961, 0.18039216]],

        [[0.11372549, 0.11764706, 0.13333334],
         [0.11372549, 0.11764706, 0.13333334],
         [0.11372549, 0.11764706, 0.13333334],
         ...,
         [0.13725491, 0.14901961, 0.18039216],
         [0.13725491, 0.14901961, 0.18039216],
         [0.13725491, 0.14901961, 0.18039216]],

        [[0.11764706, 0.11764706, 0.13333334],
         [0.11764706, 0.11764706, 0.13333334],
         [0.11764706, 0.11764706, 0.13333334],
         ...,
         [0.13725491, 0.14901961, 0.18039216],
         [0.13725491, 0.14901961, 0.18039216],
         [0.13725491, 0.14901961, 0.18039216]]]], dtype=float32),
           'prediction': array([1], dtype=int64)}
         }
)
mauicv commented 2 years ago

Hey @krishnakripaj, So I think what's happening is that the algorithm first tries the empty anchor and determines that its precision is greater than the requested precision threshold. This is why you're getting an empty anchor with precision and coverage both equal to 1. I'm not sure why your model predicts "real" for the images sampled from the empty anchor but doesn't do the same for the fake data; I'd really need to know more about the data and the model. Can you show me some examples from the dataset of both real and fake images?
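
To make that concrete, a rough sketch of the check being described (illustrative pseudologic, not alibi's actual implementation; perturb is a hypothetical helper that replaces each superpixel with its mean value with probability p_sample):

import numpy as np

def empty_anchor_precision(predict_fn, perturb, image, n_samples=100):
    # Class predicted for the unperturbed instance.
    orig_class = predict_fn(image[None]).argmax(axis=1)[0]
    # Samples drawn from the empty anchor: every superpixel is a candidate
    # for perturbation, since no superpixel is fixed by the anchor.
    samples = np.stack([perturb(image) for _ in range(n_samples)])
    preds = predict_fn(samples).argmax(axis=1)
    # Fraction of samples on which the model still predicts the same class.
    return (preds == orig_class).mean()

# If this precision already exceeds the requested threshold (e.g. 0.95),
# the search can stop at the empty anchor, which renders as a black image.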

Otherwise, there are some things you can try:

  1. Increase p_sample: The default implementation sets the p_sample parameter to 0.5, which means that when sampling from an anchor, it'll perturb superpixels not in the anchor with probability 0.5. This can mean that even for empty anchors, the samples drawn are still quite similar to the original image. Try setting p_sample=1 and see if that changes anything. (This will mean any superpixel not in the anchor will definitely be perturbed.)
  2. Use background images: The algorithm defaults to setting superpixels to their average values when they're perturbed. There is also an option to replace them with pixel values from images drawn from the dataset. (See the images_background parameter.)
  3. Increase the threshold: If you set the precision threshold higher, the empty anchor is less likely to reach it.

Can you try the above (see the sketch below for how the changes slot into your snippet) and let me know if anything changes?
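
A minimal sketch of how those three suggestions map onto the snippet from the top of the thread; background_images is a hypothetical array of images drawn from your dataset, with the same shape as the instances:

from alibi.explainers import AnchorImage

explainer = AnchorImage(predict_fn,
                        image_shape,
                        segmentation_fn='slic',
                        segmentation_kwargs={'sigma': .5},
                        images_background=background_images)  # suggestion 2: sample replacement pixels from real data

explanation_img = explainer.explain(image,
                                    threshold=0.95,  # suggestion 3: a stricter precision bar for any anchor
                                    p_sample=1.0,    # suggestion 1: always perturb superpixels outside the anchor
                                    tau=0.25)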