gabrieldemarmiesse / heatmaps


Using the code on your own model #5

Closed abdualhag closed 6 years ago

abdualhag commented 6 years ago

Hi, as I was looking through your code, there is no place where I see you loading a pre-trained model and, more oddly, no place where you load an image to create the heat-map. I am getting confused here. The model I trained has nothing to do with cats or dogs; can I still use your code to generate the heat-map? The code I used to train my model is below, and it uses TensorFlow:

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K

# dimensions of images.
img_width, img_height = 150, 150

train_data_dir = 'data/train'
validation_data_dir = 'data/validation'
nb_train_samples = 2946
nb_validation_samples = 990
epochs = 50
batch_size = 16

if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

# this is the augmentation configuration I use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# this is the augmentation configuration I use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

model.save('first_try_model.h5')
model.save_weights('first_try.h5')
model_json=model.to_json()
with open("model.json","w") as json_file:
    json_file.write(model_json)

Any help is most appreciated.

gabrieldemarmiesse commented 6 years ago

I'll look it up this weekend. Please format your comments with markdown to make them more readable next time; it helps the issue resolution process (I edited your comment with the right formatting). As a quick answer: I load a VGG16, but it can be any model (I'm not sure it can be a sequential model though, I'll have to look it up). In my example, I use an image called "./dog.jpg", but it could be any image, with any class predicted by the model you're using (see the sketch below).
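For reference, the README usage being described looks roughly like this minimal sketch (it relies on a display_heatmap helper like the one defined later in this thread; idx and "./dog.jpg" are just examples):

from keras.applications.vgg16 import VGG16
from heatmap import to_heatmap

model = VGG16()                # downloads pre-trained ImageNet weights
new_model = to_heatmap(model)  # rebuilt model outputs a score map per class
idx = 0  # index of the class you care about
display_heatmap(new_model, "./dog.jpg", idx)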

I'll give you more details later, I don't really have the time now.

abdualhag commented 6 years ago

I am sorry the code came out so messed up when I copy-pasted it. I am looking forward to hearing which parts of your code are relevant to adapting it to my model.

gabrieldemarmiesse commented 6 years ago

It has been a good opportunity to update this package for TensorFlow. With this piece of code, you should be able to display a heatmap of whatever you are classifying. I removed the training since I don't have your data.

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense, Input
from keras import backend as K
from heatmap import to_heatmap
import matplotlib.pyplot as plt
import numpy as np
from keras.preprocessing import image

# dimensions of images.
img_width, img_height = 150, 150

train_data_dir = 'data/train'
validation_data_dir = 'data/validation'
nb_train_samples = 2946
nb_validation_samples = 990
epochs = 50
batch_size = 16

if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

input_tensor = Input(shape=input_shape)
x = Conv2D(32, (3, 3), activation="relu", input_shape=input_shape)(input_tensor)
x = MaxPooling2D(pool_size=(2, 2))(x)

x = Conv2D(32, (3, 3), activation='relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)

x = Conv2D(64, (3, 3), activation="relu")(x)
x = MaxPooling2D(pool_size=(2, 2))(x)

x = Flatten()(x)
x = Dense(64, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(1)(x)
x = Activation('sigmoid')(x)

model = Model(input_tensor, x)

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

# Training goes here.

def display_heatmap(new_model, img_path, ids, preprocessing=None):
    # The quality is reduced.
    # If you have more than 8GB of RAM, you can try to increase it.
    img = image.load_img(img_path, target_size=(800, 1280))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    if preprocessing is not None:
        x = preprocessing(x)  # apply the caller-supplied preprocessing function

    out = new_model.predict(x)

    heatmap = out[0]  # Removing batch axis.

    if K.image_data_format() == 'channels_first':
        heatmap = heatmap[ids]
        if heatmap.ndim == 3:
            heatmap = np.sum(heatmap, axis=0)
    else:
        heatmap = heatmap[:, :, ids]
        if heatmap.ndim == 3:
            heatmap = np.sum(heatmap, axis=2)

    plt.imshow(heatmap, interpolation="none")
    plt.show()

new_model = to_heatmap(model)
display_heatmap(new_model, "./dog.jpg", 0)

Feel free to ask if something isn't clear.

I had to make your model a functional one, as my module doesn't work with sequential models; one general conversion recipe is sketched below.
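For reference, one general way to turn a trained Sequential model into a functional one (this is plain Keras, nothing specific to this module; model below is assumed to be your trained Sequential model):

from keras.models import Model
from keras.layers import Input

input_tensor = Input(shape=model.input_shape[1:])  # drop the batch axis
x = input_tensor
for layer in model.layers:
    x = layer(x)  # calling the trained layers reuses their weights
functional_model = Model(input_tensor, x)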

Don't forget to close the issue if everything is clear.

abdualhag commented 6 years ago

Very interesting piece of code you wrote here. Correct me if I am mistaken, but doesn't the code require repeating the training process each time I would like to predict the class or generate a heat-map?

abdualhag commented 6 years ago

One more note: what am I importing here and from where?

from heatmap import to_heatmap

abdualhag commented 6 years ago

I also thought I should share with you a piece of code that works for me in a strange way. Note that this piece of code works with models trained with either Theano or TensorFlow, where the models were trained by the exact code I posted at first. It does, however, require Theano to generate the heat-map.

import numpy as np
np.random.seed(1337)  # for reproducibility

from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
from keras.models import model_from_json
from keras.preprocessing.image import ImageDataGenerator
import keras.preprocessing.image as kimg
from matplotlib import pyplot as plt
import argparse
import cv2
import theano
import os

# Now load json and create model
json_file = open("model.json", 'r')
loaded_model_json = json_file.read()
json_file.close()

model = model_from_json(loaded_model_json)

# load weights into new model
model.load_weights("first_try.h5")

model.compile(loss='categorical_crossentropy',
              optimizer='adadelta',
              metrics=['accuracy'])

# These parameters need to be the same as for *validation* part of training
img_width, img_height = 150, 150
test_datagen = ImageDataGenerator(rescale=1./255)

#Format picture and test it
im_original = cv2.resize(cv2.imread('with.jpg'), (img_width, img_height))

im = np.expand_dims(im_original, axis=0)

prediction = model.predict(im)
print(prediction)
blob_bool = 0

if prediction < 0.2:
    print("I see the blob!")
    blob_bool = 1
else:
    print("Blob not found.")
    blob_bool = 0

def get_activations(model, layer, X_batch):
    # Build a backend function from the layer-3 input to the requested
    # layer's output (layer indices as in the original script).
    get_activations_fn = K.function([model.layers[3].input, K.learning_phase()],
                                    [model.layers[layer].output])
    activations = get_activations_fn([X_batch, 0])
    return activations

if blob_bool == 0:
    im_2 = cv2.resize(cv2.imread('with.jpg'), (img_width, img_height))  # e.g. data/train/blob/blob-1235.jpg
    im1 = im_2.transpose((2, 0, 1))  # to channels_first ordering
    im1 = np.expand_dims(im1, axis=0)
    feat = get_activations(model, 2, im1)  # layer 2 works
    plt.imshow(feat[0][0][0])
    plt.savefig("heatmap_real.jpg")
    print("Created heatmap")
elif os.path.isfile('heatmap_real.jpg'):
    os.remove('heatmap_real.jpg')

gabrieldemarmiesse commented 6 years ago

No, you only need to train the model once. If you pip install the repository, you should be able to import heatmap (this module). I've updated the module for Tensorflow. It should work now. Can you update and tell me if it works for you?

abdualhag commented 6 years ago

I will look more into it later, but below is the error I am getting for now.

[screenshot: 2018-02-09 18-21-08]

gabrieldemarmiesse commented 6 years ago

When I said pip install, I was referring to the pip install mentioned in the README.md of this GitHub repository. Can you please try:

pip install git+https://github.com/gabrieldemarmiesse/heatmaps.git

abdualhag commented 6 years ago

Still not working. Maybe it's because I am using it in an Anaconda virtual environment. See below for details.

[screenshot: 2018-02-09 20-49-39]

On a second note, the code which you sent me only defines the model but never actually loads the weights, or am I missing something? There should be something like: model.load_weights("model_weight.h5")

abdualhag commented 6 years ago

An update: I just used the original code from the README.md in another virtual environment where I have Theano installed, and I am still getting the same error. My guess is that the virtual environment is causing the errors.

[screenshot: 2018-02-09 21-04-48]

gabrieldemarmiesse commented 6 years ago

Sorry for all this trouble. I think I fixed the issue. Can you uninstall using:

pip uninstall Keras-to-heatmaps

Then install again using:

git clone https://github.com/gabrieldemarmiesse/heatmaps.git
cd heatmaps
pip install -e .

Don't forget the dot after the -e. There was an issue with pip: the MANIFEST.in wasn't working at all like I expected. Please let me know how it went on your side.

gabrieldemarmiesse commented 6 years ago

About what you said earlier: if you train the model beforehand, you don't need to load any weights, as they are already in the model. But nothing prevents you from saving the weights and the model and reloading everything later to use the function to_heatmap, as in the sketch below.
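A minimal sketch of that save-once, reload-later workflow ('my_model.h5' is just an example file name):

# after training:
model.save('my_model.h5')  # stores the architecture and the weights together

# later, possibly in a fresh session:
from keras.models import load_model
from heatmap import to_heatmap

model = load_model('my_model.h5')  # the weights are restored with the model
new_model = to_heatmap(model)      # no retraining needed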

abdualhag commented 6 years ago

I have tried it in both the Theano and TensorFlow environments, and here is what happened. In TensorFlow, the package would not even install. Note that wheel was already installed, as shown. Below is a screenshot.

[screenshot: 2018-02-10 11-33-47]

Moving to the Theano environment, the package installed with no issue. When I try to run the code, however, I get different errors depending on how I load the model. First try:

from keras.models import load_model

model = load_model('My_model.h5')
new_model = to_heatmap(model)
idx = 0  # The index of the class you care about, here the first one.
display_heatmap(new_model, "./with.jpg", idx)

[screenshot: 2018-02-10 11-52-55]

Second try:

from keras.models import load_model
from keras.models import model_from_json

# Now load json and create model
json_file = open("model.json", 'r')
loaded_model_json = json_file.read()
json_file.close()

model = model_from_json(loaded_model_json)

# load weights into new model
model.load_weights("weights.h5")

model.compile(loss='categorical_crossentropy',
              optimizer='adadelta',
              metrics=['accuracy'])

new_model = to_heatmap(model)
idx = 0  # The index of the class you care about, here the first one.
display_heatmap(new_model, "./with.jpg", idx)

[screenshot: 2018-02-10 11-56-28]

Third try:

new_model = to_heatmap('My_model.h5')
idx = 0  # The index of the class you care about, here the first one.
display_heatmap(new_model, "./with.jpg", idx)

[screenshot: 2018-02-10 11-58-41]

I am sorry this has dragged on for so long, but I would really like to see it working. I am not sure where I am going wrong with loading my model. Note that everything done in the Theano environment was based on the original code.

gabrieldemarmiesse commented 6 years ago

So for the first screenshot: "no module named wheel in setuptools" is more of a setuptools issue (it can mean that your environment is in a weird state; I suggest creating a new one).

The issues after that come from passing a sequential model to the function to_heatmap; it's mentioned in the description of this module that to_heatmap doesn't work with sequential models, only functional ones.

Also, on the last try, you pass a string directly to the function, but the function expects a keras model.

I suggest creating a new environment with tensorflow, reinstalling heatmaps, and then running your script. But please use a functional model, not a sequential one. I already changed your script to show you how to make a functional model instead of a sequential one.

gabrieldemarmiesse commented 6 years ago

Also, I see that you are not loading the weights before applying to_heatmap. The input to to_heatmap must be a keras model that has already been trained, i.e. one carrying the weights of a trained model. A minimal sketch is below.

I hope this helps.
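For instance, reusing the file names from your own script (and assuming model.json contains the functional version of the model):

from keras.models import model_from_json
from heatmap import to_heatmap

with open("model.json") as json_file:
    model = model_from_json(json_file.read())
model.load_weights("first_try.h5")  # load the trained weights BEFORE converting
new_model = to_heatmap(model)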

abdualhag commented 6 years ago

Well, I created another environment for tensorflow and the installation went fine. However, running the code and executing the last line caused it to crash, as I forgot to include a picture for testing. The issue came right after: when I tried to run the code again, I got the following error. I tried to reset the environment and reinstall the package, but I am still getting the same error. Any idea what is going on?

Using TensorFlow backend.
Traceback (most recent call last):
  File "heatmap.py", line 6, in <module>
    from heatmap import to_heatmap
  File "/home/abdu/anaconda2/envs/gpuTensorflow/heatmap.py", line 6, in <module>
    from heatmap import to_heatmap
ImportError: cannot import name to_heatmap

By last line, I mean in the following code.

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense, Input
from keras import backend as K
from heatmap import to_heatmap
import matplotlib.pyplot as plt
import numpy as np
from keras.preprocessing import image

# dimensions of images.
img_width, img_height = 150, 150

train_data_dir = 'data/train'
validation_data_dir = 'data/validation'
nb_train_samples = 2946
nb_validation_samples = 990
epochs = 10
batch_size = 16

if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

input_tensor = Input(shape=input_shape)
x = Conv2D(32, (3, 3), activation="relu", input_shape=input_shape)(input_tensor)
x = MaxPooling2D(pool_size=(2, 2))(x)

x = Conv2D(32, (3, 3), activation='relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)

x = Conv2D(64, (3, 3), activation="relu")(x)
x = MaxPooling2D(pool_size=(2, 2))(x)

x = Flatten()(x)
x = Dense(64, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(1)(x)
x = Activation('sigmoid')(x)

model = Model(input_tensor, x)

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

# Training goes here.

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

model.save('first_try_model.h5')
model.save_weights('first_try.h5')
model_json=model.to_json()
with open("model.json","w") as json_file:
    json_file.write(model_json)

def display_heatmap(new_model, img_path, ids, preprocessing=None):
    # The quality is reduced.
    # If you have more than 8GB of RAM, you can try to increase it.
    img = image.load_img(img_path, target_size=(800, 1280))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    if preprocessing is not None:
        x = preprocessing(x)  # apply the caller-supplied preprocessing function

    out = new_model.predict(x)

    heatmap = out[0]  # Removing batch axis.

    if K.image_data_format() == 'channels_first':
        heatmap = heatmap[ids]
        if heatmap.ndim == 3:
            heatmap = np.sum(heatmap, axis=0)
    else:
        heatmap = heatmap[:, :, ids]
        if heatmap.ndim == 3:
            heatmap = np.sum(heatmap, axis=2)

    plt.imshow(heatmap, interpolation="none")
    plt.show()

new_model = to_heatmap(model)
display_heatmap(new_model, "./with.jpg", 0)

abdualhag commented 6 years ago

To get you updated: I just created another virtual environment, and the same error occurred from the beginning. The issue, however, does not appear in the Theano environment.

gabrieldemarmiesse commented 6 years ago

I don't really see how adding a picture can cause the script to fail earlier.

abdualhag commented 6 years ago

Exactly what I thought. My guess is that since the code crashed before finishing, it somehow left the heatmap package in a weird state. But I am not sure why this would be carried over to the other virtual environments but not the Theano one.

gabrieldemarmiesse commented 6 years ago

If the error appears only in one environment, it may be an issue with the Python and environment installation. I can't reproduce the error, and trying to debug an error that a developer can't reproduce is notoriously difficult, even impossible. If you can't run:

git clone https://github.com/gabrieldemarmiesse/heatmaps.git
cd heatmaps
pip install -e .
cd ../..
python -c "import heatmap"

without failing, then I can't really help you.

This way of installing works on my end so I don't know how to help you.

abdualhag commented 6 years ago

So I was able to recreate the same error in the Theano environment. Let me correct myself: the issue has nothing to do with virtual environments. Once you get the error in one environment that uses Theano (or TensorFlow), all other environments that use Theano (or TensorFlow) will have the same error.

Let me demonstrate with the following. As you can see, heatmap worked before running the following code and stopped working after.


from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense, Input
from keras import backend as K
from heatmap import to_heatmap
import matplotlib.pyplot as plt
import numpy as np
from keras.preprocessing import image
from keras.models import load_model

def display_heatmap(new_model, img_path, ids, preprocessing=None):
    # The quality is reduced.
    # If you have more than 8GB of RAM, you can try to increase it.
    img = image.load_img(img_path, target_size=(800, 1280))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    if preprocessing is not None:
        x = preprocessing(x)  # apply the caller-supplied preprocessing function

    out = new_model.predict(x)

    heatmap = out[0]  # Removing batch axis.

    if K.image_data_format() == 'channels_first':
        heatmap = heatmap[ids]
        if heatmap.ndim == 3:
            heatmap = np.sum(heatmap, axis=0)
    else:
        heatmap = heatmap[:, :, ids]
        if heatmap.ndim == 3:
            heatmap = np.sum(heatmap, axis=2)

    plt.imshow(heatmap, interpolation="none")
    plt.show()

model = load_model('first_try_model.h5')
new_model = to_heatmap(model)
display_heatmap(new_model, "./with.jpg", 0)

Also shown below.

[screenshot: 2018-02-10 16-33-57]

On the other hand, the earlier code which I used to train the model had no issue and actually gave me back the heatmap.


from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense, Input
from keras import backend as K
from heatmap import to_heatmap
import matplotlib.pyplot as plt
import numpy as np
from keras.preprocessing import image

# dimensions of images.
img_width, img_height = 150, 150

train_data_dir = 'data/train'
validation_data_dir = 'data/validation'
nb_train_samples = 2946
nb_validation_samples = 990
epochs = 10
batch_size = 16

if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

input_tensor = Input(shape=input_shape)
x = Conv2D(32, (3, 3), activation="relu", input_shape=input_shape)(input_tensor)
x = MaxPooling2D(pool_size=(2, 2))(x)

x = Conv2D(32, (3, 3), activation='relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)

x = Conv2D(64, (3, 3), activation="relu")(x)
x = MaxPooling2D(pool_size=(2, 2))(x)

x = Flatten()(x)
x = Dense(64, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(1)(x)
x = Activation('sigmoid')(x)

model = Model(input_tensor, x)

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

# Training goes here.

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

model.save('first_try_model.h5')
model.save_weights('first_try.h5')
model_json=model.to_json()
with open("model.json","w") as json_file:
    json_file.write(model_json)

def display_heatmap(new_model, img_path, ids, preprocessing=None):
    # The quality is reduced.
    # If you have more than 8GB of RAM, you can try to increase it.
    img = image.load_img(img_path, target_size=(800, 1280))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    if preprocessing is not None:
        x = preprocessing(x)  # apply the caller-supplied preprocessing function

    out = new_model.predict(x)

    heatmap = out[0]  # Removing batch axis.

    if K.image_data_format() == 'channels_first':
        heatmap = heatmap[ids]
        if heatmap.ndim == 3:
            heatmap = np.sum(heatmap, axis=0)
    else:
        heatmap = heatmap[:, :, ids]
        if heatmap.ndim == 3:
            heatmap = np.sum(heatmap, axis=2)

    plt.imshow(heatmap, interpolation="none")
    plt.show()

new_model = to_heatmap(model)
display_heatmap(new_model, "./with.jpg", 0)

The heatmap generated by your code:

[image: figure_1]

The heatmap generated by my code:

[image: heatmap_real]

The original image:

[image: with.jpg]

As you can see, I really want a heatmap like the one generated by your code, except that it doesn't really mark the object, i.e. the white blob in the top-right area. For now, my system is acting weird in both the Theano and TensorFlow environments when I import heatmap. I might just need to reinstall Anaconda, but it would be great if it didn't crash next time.

Thanks and sorry for putting you through so much trouble.

gabrieldemarmiesse commented 6 years ago

Concerning the fact that heatmap stopped working after using the script once, I have no idea what is causing this. I tried your script (with the training part removed) and it didn't stop working afterwards.

But the fact that the heatmap isn't correct may be a bug in the package. It's possible that I have to change the way weights are reshaped for theano and tensorflow. I'm going to investigate that.

gabrieldemarmiesse commented 6 years ago

And no worries, you're helping me find bugs, which is always good for an open source program.

gabrieldemarmiesse commented 6 years ago

I just tested the code with Theano and TensorFlow and everything is working; the heatmap is correct. I can only advise you to run the file heatmaps/examples/demo.py to check that it works before trying your own script. If demo.py works but the heatmaps you obtain with your script aren't good, it means that the accuracy of the classifier (the keras model) isn't good enough. Technically, my module just does an efficient sliding-window procedure, so the result is only as good as the classifier that was provided (see the illustrative sketch below).
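For intuition, here is a naive (and much slower) sliding-window loop that computes roughly what the module produces in a single pass. This is purely illustrative, not the package's implementation; it assumes a channels_last image and a single sigmoid output, and the window and stride values are arbitrary:

import numpy as np

def naive_heatmap(model, img, win=150, stride=32):
    # Slide a win x win crop over the image and record the classifier's score
    # at each position; the package computes the same map in one forward pass.
    rows = (img.shape[0] - win) // stride + 1
    cols = (img.shape[1] - win) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            crop = img[i * stride:i * stride + win, j * stride:j * stride + win]
            heat[i, j] = model.predict(crop[None, ...])[0, 0]
    return heat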

abdualhag commented 6 years ago

So the problem is solved, and you would not believe what the error was. It was really nothing but the fact that my script file was named heatmap.py, which caused the import of your package to pick up my own script instead (so it crashed importing itself). I figured it out when I noticed that the demo works fine in either environment regardless of how many times I run it. The result is still far from satisfying, but I guess I just need to train my model a little more. Thanks for all the help, and I wish you the best.

gabrieldemarmiesse commented 6 years ago

No worries, it was a good opportunity for me to rework the code and make it available for TensorFlow. I'm glad you found the issue; good luck with your project!

neild0 commented 6 years ago

Hey, I'm having a similar issue for this code. I'm trying to get a .pb graph or a checkpoint file to create a heatmap. Would this be possible?

gabrieldemarmiesse commented 6 years ago

My script works only with keras models and not directly with tensorflow (my script recognizes the keras layers, recognizing the tensorflow operations would be much more complicated).

So you need an object that is a keras model.

When using the standard keras model.save() and models.load_model(), this is trivial. It is also easy to export a keras model to a .pb. However, I have never tried to convert a .pb graph to a keras model, and I don't know if it's possible.

I would say that the main issue in your case is converting a .pb into a keras model, and this issue is independent of the package I provide here, so there is not much I can help you with, I'm afraid. To work around the issue, maybe you can do it like this:

train the CNN -> save it with model.save() -> load it with keras's models.load_model() -> get a new model by using to_heatmap (this new model will produce heatmaps) -> save the new model as a .pb.
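A rough sketch of that pipeline with TF 1.x-era APIs (the exact freezing calls depend on your TensorFlow version, and 'my_model.h5' and 'heatmap_model.pb' are just example file names):

import tensorflow as tf
from keras import backend as K
from keras.models import load_model
from heatmap import to_heatmap

model = load_model('my_model.h5')  # the trained keras model
new_model = to_heatmap(model)      # heatmap-producing keras model

# Freeze the graph: bake the variables into constants, then write a .pb file.
sess = K.get_session()
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), [new_model.output.op.name])
tf.train.write_graph(frozen, '.', 'heatmap_model.pb', as_text=False)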

timontimo commented 5 years ago

Hello, first of all I would like to thank you for the discussions and the code, even if it doesn't work for me yet :D I copied the code from abdualhag and applied it to my problem. Unfortunately my heatmap is not usable, even though the setup file works fine.

The predicted out array contains only ones, but I don't think that every pixel is important. Does anyone know what my mistake could be?

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense, Input
from keras import backend as K
from heatmap_TB import to_heatmap
import matplotlib.pyplot as plt
import numpy as np
from keras.preprocessing import image

# dimensions of images.
img_width, img_height = 494, 494

train_data_dir = 'C:/Users/btn2bue/Desktop/CNN/Versuch_75/train'
validation_data_dir = 'C:/Users/btn2bue/Desktop/CNN/Versuch_75/validation'
nb_train_samples = 500
nb_validation_samples = 100
epochs = 3
batch_size = 10

# 494, 494, 3
if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

input_tensor = Input(shape=input_shape)
x = Conv2D(32, (3, 3), activation="relu", input_shape=input_shape)(input_tensor)
x = MaxPooling2D(pool_size=(2, 2))(x)

x = Conv2D(32, (3, 3), activation='relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)

x = Conv2D(64, (3, 3), activation="relu")(x)
x = MaxPooling2D(pool_size=(2, 2))(x)

x = Flatten()(x)
x = Dense(64, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(1)(x)
x = Activation('sigmoid')(x)

model = Model(input_tensor, x)

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

# Training goes here.

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

model.save('first_try_model.h5')
model.save_weights('first_try.h5')
model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)

def display_heatmap(new_model, img_path, ids, preprocessing=None):
    # The quality is reduced.
    # If you have more than 8GB of RAM, you can try to increase it.
    img = image.load_img(img_path, target_size=(800, 1280))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    if preprocessing is not None:
        x = preprocessing(x)

    out = new_model.predict(x)
    o1, o2, o3, o4 = out.shape
    print('o1: ' + str(o1) + ' o2: ' + str(o2) + ' o3: ' + str(o3) + ' o4: ' + str(o4))
    print('out: ')
    print(out)
    heatmap = out[0]  # Removing batch axis.

    if K.image_data_format() == 'channels_first':
        heatmap = heatmap[ids]
        if heatmap.ndim == 3:
            heatmap = np.sum(heatmap, axis=0)
    else:
        heatmap = heatmap[:, :, ids]
        if heatmap.ndim == 3:
            heatmap = np.sum(heatmap, axis=2)

    plt.imshow(heatmap, interpolation="none")
    plt.show()

new_model = to_heatmap(model)
display_heatmap(new_model, "C:/Users/btn2bue/Desktop/CNN/1_bad_3.bmp", 0)

timontimo commented 5 years ago

This is what my heatmap looks like. If I use the real size of the images for the target size, I get only one prediction for my image (img = image.load_img(img_path, target_size=(494, 494))).

[image: Plot]

timontimo commented 5 years ago

Okay, I think it works now. It is important to play a little with the target size of the picture you load in case you get a heatmap like the one I got; see the sketch below.
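For example, a small sketch of that tuning, assuming a variant of the display_heatmap helper from earlier in the thread that takes the load size as a parameter (the helper name, the size list, and the channels_last layout are assumptions):

def display_heatmap_at(new_model, img_path, ids, target_size):
    # Same idea as display_heatmap above, but the load size is an argument.
    img = image.load_img(img_path, target_size=target_size)
    x = np.expand_dims(image.img_to_array(img), axis=0)
    out = new_model.predict(x)
    heatmap = out[0][:, :, ids]  # channels_last assumed
    plt.imshow(heatmap, interpolation="none")
    plt.show()

# Larger load sizes give finer heatmaps but need more memory.
for size in [(494, 494), (800, 1280), (1200, 1200)]:
    display_heatmap_at(new_model, "C:/Users/btn2bue/Desktop/CNN/1_bad_3.bmp", 0, size)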