raghakot / keras-vis

Neural network visualization toolkit for keras
https://raghakot.github.io/keras-vis
MIT License
2.98k stars 660 forks

Unexpected Keyword Argument 'jitter' #18

Closed DylanCope closed 7 years ago

DylanCope commented 7 years ago

I got an error when trying to generate an attention map using visualize_saliency. On line 153 of visualization.py there is an unexpected keyword argument, jitter.

eric-tramel commented 7 years ago

I'm having the same problem. I'm using Anaconda with a Python 3.5 virtual environment and OpenCV 3. Here are the output and the offending line:

Working on filters: [3]
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-15-4d1a18352923> in <module>()
      4 
      5 
----> 6 heatmap = visualize_saliency(faceNet, 7, yclas[idx], X[idx,:,:,:], text=names[yclas[idx]])

/Users/eric/anaconda/envs/tf35/lib/python3.5/site-packages/keras_vis-0.1.2-py3.5.egg/vis/visualization.py in visualize_saliency(model, layer_idx, filter_indices, seed_img, text, overlay)
    151     ]
    152     opt = Optimizer(model.input, losses)
--> 153     grads = opt.minimize(max_iter=1, verbose=False, jitter=0, seed_img=seed_img)[1]
    154 
    155     # We are minimizing loss as opposed to maximizing output as with the paper.

TypeError: minimize() got an unexpected keyword argument 'jitter'
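The traceback above can be reproduced in isolation. This is a minimal sketch, not keras-vis's actual internals: the installed Optimizer.minimize simply predates the jitter parameter, so passing it raises a TypeError before any optimization runs.

```python
# Hypothetical stand-in for the installed (older) Optimizer class,
# whose minimize() does not yet accept a `jitter` keyword.
class Optimizer:
    def minimize(self, max_iter=200, verbose=True, seed_img=None):
        return seed_img

opt = Optimizer()
try:
    # The newer visualization.py passes jitter=0, which the old
    # signature rejects:
    opt.minimize(max_iter=1, verbose=False, jitter=0, seed_img=None)
except TypeError as e:
    print(e)
```

This is the usual symptom of a stale egg install: the visualization module and the optimizer module come from different versions of the package.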
eric-tramel commented 7 years ago

Additionally, if one removes the jitter keyword from line 153, a new error appears when "smoothening the activation map":

Working on filters: [3]
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-7-4d1a18352923> in <module>()
      4 
      5 
----> 6 heatmap = visualize_saliency(faceNet, 7, yclas[idx], X[idx,:,:,:], text=names[yclas[idx]])

/Users/eric/anaconda/envs/tf35/lib/python3.5/site-packages/keras_vis-0.1.2-py3.5.egg/vis/visualization.py in visualize_saliency(model, layer_idx, filter_indices, seed_img, text, overlay)
    164     # Smoothen activation map
    165     grads = utils.deprocess_image(grads[0])
--> 166     grads /= np.max(grads)
    167 
    168     # Convert to heatmap and zero out low probabilities for a cleaner output.

TypeError: ufunc 'true_divide' output (typecode 'd') could not be coerced to provided output parameter (typecode 'B') according to the casting rule ''same_kind''

It seems this is coming from updates to NumPy's casting rules; see NumPy issue #6464.
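The casting failure can be reproduced with NumPy alone. A minimal sketch, assuming deprocess_image returns a uint8 array (as the 'B' typecode in the traceback suggests): in-place true division would have to write float results back into the integer buffer, which the 'same_kind' casting rule forbids.

```python
import numpy as np

# Stand-in for the uint8 array returned by utils.deprocess_image:
grads = np.array([0, 128, 255], dtype=np.uint8)

try:
    # In-place division: NumPy refuses to coerce the float64 result
    # of true_divide into the uint8 output buffer.
    grads /= np.max(grads)
except TypeError as e:
    print("in-place division failed:", e)

# Out-of-place division allocates a new float array instead:
normalized = grads / np.max(grads)
print(normalized.dtype)  # float64
```

This is why replacing `grads /= np.max(grads)` with `grads = grads / np.max(grads)` (or casting to float first) resolves the error.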

raghakot commented 7 years ago

Fixed in latest.

DylanCope commented 7 years ago

@eric-tramel, I had the same issue when removing the argument. I solved it by replacing the line grads /= np.max(grads) with grads = grads / np.max(grads), which allocates a new float array instead of dividing in place. I also installed the latest version of the package, and this issue persists. On top of this I get an error for heatmap[np.where(grads <= 0.2)] = 0, which I fixed by changing the shape of heatmap, which for some reason was of rank 2.

I don't know why I would be having these issues and you wouldn't, @raghakot. I would guess you've got an out-of-date version of NumPy.
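The rank mismatch in the thresholding line can also be sketched with NumPy alone. The shapes here are assumptions: after the cv2.cvtColor call, grads is rank 3 (H, W, 3), so np.where yields three index arrays, and indexing a rank-2 heatmap with them fails.

```python
import numpy as np

# Assumed shapes reconstructing the failure:
grads = np.random.rand(4, 4, 3)      # rank-3 after GRAY2RGB conversion
heatmap_rank2 = np.zeros((4, 4))     # the unexpectedly rank-2 heatmap
heatmap_rank3 = np.zeros((4, 4, 3))  # the shape the indexing expects

try:
    # np.where on a rank-3 condition returns three index arrays,
    # which is too many indices for a rank-2 array:
    heatmap_rank2[np.where(grads <= 0.2)] = 0
except IndexError as e:
    print("rank mismatch:", e)

# With matching ranks, the thresholding zeroes low-saliency pixels:
heatmap_rank3[np.where(grads <= 0.2)] = 0
```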

raghakot commented 7 years ago

Hmm, the tests pass on Travis. I don't think it's a NumPy versioning issue. Can you post a full example of what you are trying to run? Is it on Theano?

raghakot commented 7 years ago

Feel free to reopen if you are still seeing issues. There is no point in closing issues if the fix doesn't work.

DylanCope commented 7 years ago

It doesn't seem right to reopen the issue, as the original problem in this thread was addressed. Also, I did get the function working in the end; my final code looked like this:

    # Assumes numpy (np), cv2, and vis.utils are imported, and grads is
    # the gradient array returned by the optimizer.
    # Smoothen activation map; cast to float so in-place division works.
    grads = np.float32(utils.deprocess_image(grads[0]))
    grads /= np.max(grads)
    grads = cv2.cvtColor(grads, cv2.COLOR_GRAY2RGB)
    blurred = np.uint8(cv2.GaussianBlur(255 * grads, (3, 3), 0))
    # Convert to heatmap and zero out low probabilities for a cleaner output.
    heatmap = cv2.applyColorMap(blurred, cv2.COLORMAP_JET)

    heatmap[np.where(grads <= 0.2)] = 0

I'm not sure if it's actually giving the correct output. This is an example using the Xception model with ImageNet weights: (heatmap image attached) It seems reasonable, but it doesn't look as blurred as the examples and hasn't been thresholded in the same way.

raghakot commented 7 years ago

Try grad-CAM; it usually gives better results.