Hi @marcotcr, if I use an image preprocessed by ResNet50:
```python
import numpy as np
import matplotlib.pyplot as plt
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions

model = ResNet50(weights='imagenet')

img_path = 'cat.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = model.predict(x)
print(x)
print('Predicted:', decode_predictions(preds, top=3)[0])
plt.imshow(x[0] / 255)
```
The code above gives an error like this:
How do I visualize this image with plt.imshow? Thank you
That code works for me, but the image looks funny because of the preprocessing. If you run imshow before preprocess_input, it looks fine. The way to use LIME with resnet is to have a predict_fn like the following:
```python
def predict_fn(x):
    return model.predict(preprocess_input(x))
```
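To make the flow concrete, here is a minimal sketch of wiring that predict_fn into the image explainer, assuming the `model` and `predict_fn` defined above; the `'cat.jpg'` path and the `top_labels` / `num_samples` values are just illustrative. Note that the image handed to explain_instance is the raw, unpreprocessed array (the same one you can look at with plt.imshow):

```python
import matplotlib.pyplot as plt
from keras.preprocessing import image
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Load the raw image (no preprocess_input here) -- plain uint8 pixels, as skimage expects.
img = image.img_to_array(image.load_img('cat.jpg', target_size=(224, 224))).astype('uint8')

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(img, predict_fn,
                                         top_labels=5, hide_color=0,
                                         num_samples=1000)

# Highlight the superpixels that support the top predicted class.
temp, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                            positive_only=True,
                                            num_features=5, hide_rest=False)
plt.imshow(mark_boundaries(temp, mask))
plt.show()
```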
I am also struggling to understand precisely what should be used as the image. I am using some of the TensorFlow MobileNet retraining code. My predict function takes output from the following:
```python
import tensorflow as tf

def read_rawimage_from_image_file(file_name, input_height=299, input_width=299,
                                  input_mean=0, input_std=255):
    input_name = "file_reader"
    output_name = "image_reader"
    file_reader = tf.read_file(file_name, input_name)
    if file_name.endswith(".png"):
        image_reader = tf.image.decode_png(file_reader, channels=3, name='png_reader')
    elif file_name.endswith(".gif"):
        image_reader = tf.squeeze(tf.image.decode_gif(file_reader, name='gif_reader'))
    elif file_name.endswith(".bmp"):
        image_reader = tf.image.decode_bmp(file_reader, name='bmp_reader')
    else:
        image_reader = tf.image.decode_jpeg(file_reader, channels=3, name='jpeg_reader')
    sess = tf.Session()
    result = sess.run(image_reader)
    return result
```
i.e. a TensorFlow-decoded image file. However, I get the following error from LIME:
```
Traceback (most recent call last):
  File "/home/cait/anaconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/cait/anaconda3/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/cait/tensorflow-for-poets-2/scripts/label_image_fcn.py", line 243, in
```
Can you please tell me what I should be using as input instead of the raw image binary, so that I can use both the MobileNet scripts as well as LIME? Thanks.
I'm guessing you called explain_instance with an image that is [1, *, *, *] when it should be [*, *, *].
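If so, one quick fix is to drop the leading batch dimension before calling explain_instance (a small sketch; `img`, `explainer` and `predict_fn` stand for whatever you already have):

```python
import numpy as np

img = np.squeeze(img, axis=0)  # (1, H, W, 3) -> (H, W, 3)
explanation = explainer.explain_instance(img, predict_fn, top_labels=5, num_samples=1000)
```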
Different neural networks may preprocess images differently (e.g. resnet vs inception). Our image explainer assumes the image is in a format that skimage can understand. In the tutorial, we should remove inception-specific stuff and just use a normal image as input, with preprocessing + prediction inside the prediction function.
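For the MobileNet case above, that pattern could look roughly like the sketch below. It assumes `sess`, `input_operation` and `output_operation` are already set up the way the label_image script does, the image path is illustrative, and the 299 / 0 / 255 values simply mirror the defaults of read_rawimage_from_image_file; nothing here is LIME-specific beyond explain_instance itself:

```python
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from lime import lime_image

def predict_fn(images, input_height=299, input_width=299, input_mean=0, input_std=255):
    # Do the MobileNet-specific resizing/normalization here, inside the prediction function.
    batch = np.stack([
        (resize(img, (input_height, input_width), preserve_range=True) - input_mean) / input_std
        for img in images
    ]).astype(np.float32)
    return sess.run(output_operation.outputs[0],
                    {input_operation.outputs[0]: batch})

# A normal image as input: a plain H x W x 3 uint8 array.
img = imread('some_image.jpg')

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(img, predict_fn, top_labels=5, num_samples=1000)
```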