marcotcr / lime

Lime: Explaining the predictions of any machine learning classifier
BSD 2-Clause "Simplified" License

Cannot use lime with grayscale images #661

Open rEstela opened 2 years ago

rEstela commented 2 years ago

Hello Marco!

I have an issue with grayscale images. If I understand correctly, when using grayscale images what I have to do is:

1. Convert the image from `(1, img_width, img_height, 1)` to `(img_width, img_height)`:

```python
image = x_img[0, :, :, 0]
```

2. Define a function to convert the image from `(img_width, img_height)` back to `(1, img_width, img_height, 1)`:

```python
def new_predict_fn(img):
    img = img[np.newaxis, ..., np.newaxis]
    return model.predict(img)
```

3. Run LIME:

```python
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, new_predict_fn(image), top_labels=5,
                                         hide_color=0, num_samples=250)
```

However, when I do this it starts to run but it stops at 9/250 and I get:

```
TypeError                                 Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_6332/722228459.py in <module>
      9 print('image shape: ', image.shape)
     10
---> 11 explanation = explainer.explain_instance(image,\
     12                     new_predict_fn(image), top_labels=5,\
     13                     hide_color=0, num_samples=250)

~\anaconda3\lib\site-packages\lime\lime_image.py in explain_instance(self, image, classifier_fn, labels, hide_color, top_labels, num_features, num_samples, batch_size, segmentation_fn, distance_metric, model_regressor, random_seed)
    196             top = labels
    197
--> 198         data, labels = self.data_labels(image, fudged_image, segments,
    199                                         classifier_fn, num_samples,
    200                                         batch_size=batch_size)

~\anaconda3\lib\site-packages\lime\lime_image.py in data_labels(self, image, fudged_image, segments, classifier_fn, num_samples, batch_size)
    259                 imgs.append(temp)
    260                 if len(imgs) == batch_size:
--> 261                     preds = classifier_fn(np.array(imgs))
    262                     labels.extend(preds)
    263                     imgs = []

TypeError: 'numpy.ndarray' object is not callable
```
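(Note from the traceback: `classifier_fn` at `lime_image.py` line 261 must be a callable, but the `explain_instance` call above passes `new_predict_fn(image)`, i.e. the *result* of calling the function, which is an ndarray. A minimal sketch of the difference, with a dummy predict function standing in for `model.predict`:)

```python
import numpy as np

def new_predict_fn(imgs):
    # dummy stand-in for the real model.predict: one row of 5 scores per image
    return np.full((imgs.shape[0], 5), 0.2)

batch = np.zeros((4, 28, 28, 3))

# explain_instance(image, new_predict_fn, ...) passes the *function*,
# so LIME can call it on every perturbed batch.
assert callable(new_predict_fn)

# explain_instance(image, new_predict_fn(image), ...) passes the result of
# one call, an ndarray; when LIME later tries to call it, this raises
# "TypeError: 'numpy.ndarray' object is not callable".
result = new_predict_fn(batch)
assert not callable(result)
```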

agarcia-ruiz commented 2 years ago

Hi, I had a similar issue and it was because my model.predict(img) returned an array within an array, so I just did model.predict(img)[0] instead. I don't know if this is your issue, but it might help to check the output shape of the predict call.
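(For example, a sketch with a hypothetical `nested_predict` that wraps its output in an extra dimension, as described above:)

```python
import numpy as np

def nested_predict(imgs):
    # hypothetical model.predict that returns an "array within an array":
    # shape (1, n_samples, n_classes) instead of (n_samples, n_classes)
    return np.full((1, imgs.shape[0], 2), 0.5)

batch = np.zeros((8, 32, 32, 3))
raw = nested_predict(batch)           # shape (1, 8, 2): nested one level too deep
unwrapped = nested_predict(batch)[0]  # shape (8, 2): what LIME expects
```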

Rabia3 commented 2 years ago

LIME cannot explain grayscale images directly. You have to convert your grayscale images into RGB images.
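(A numpy sketch of that conversion, repeating the gray values across three channels, which is the same thing `skimage.color.gray2rgb` does:)

```python
import numpy as np

gray = np.random.rand(64, 64)                       # (H, W) grayscale image
rgb = np.repeat(gray[..., np.newaxis], 3, axis=-1)  # (H, W, 3) RGB copy
```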

RaffaeleBerzoini commented 1 year ago

Hi, I had the same problem some hours ago. I don't know if you still need this, but here is my workaround:

```python
def predict4lime(img2):
    # print(img2.shape)  # uncomment to inspect the batch shape LIME passes in
    return model.predict(img2[:, :, :, 0])

exp = explainer.explain_instance(img[0, :, :, 0], predict4lime, top_labels=1,
                                 hide_color=0, num_samples=1000)
```

In my script `img` is a `(1, 256, 256, 1)` array; to `explain_instance` I'm giving it as a `(256, 256)`.

`predict4lime` is the normal prediction, but only on the first channel. Inside `explain_instance` they call skimage's `gray2rgb`. This function simply takes the gray image and copies it onto three channels, so you know that each channel of the new RGB image is simply your original gray image.

You might be wondering why I'm giving a `(256, 256)` and then indexing four dimensions instead of three in `predict4lime`. Well, the fourth (channel) dimension is created by `explain_instance` after the `gray2rgb` call, and the first (batch) dimension is also generated by `explain_instance`. You can check this by uncommenting the print inside `predict4lime`.
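(The shape flow described above can be sketched end to end, with a dummy `model_predict` standing in for the real `model.predict`:)

```python
import numpy as np

def model_predict(batch):
    # dummy stand-in for model.predict on (N, 256, 256) grayscale batches
    return np.full((batch.shape[0], 2), 0.5)

def predict4lime(img2):
    # LIME hands us (N, 256, 256, 3); all three channels are identical
    # copies of the gray image, so keep only channel 0
    return model_predict(img2[:, :, :, 0])

gray = np.zeros((256, 256))                         # what goes to explain_instance
rgb = np.repeat(gray[..., np.newaxis], 3, axis=-1)  # gray2rgb done inside LIME
batch = rgb[np.newaxis, ...]                        # batch dim added by LIME
preds = predict4lime(batch)                         # shape (1, 2)
```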