raghakot / keras-vis

Neural network visualization toolkit for keras
https://raghakot.github.io/keras-vis
MIT License

Is keras-vis requiring python2.7? Is python3.5 supported? #140

Closed. jinchenglee closed this issue 5 years ago.

jinchenglee commented 5 years ago

I suspect some of the issues I'm seeing are caused by the fact that I only use Python 3.5.

keisen commented 5 years ago

Hi, @jinchenglee.

keras-vis supports both. We have been testing with Python 2.7 and 3.6 on Travis CI (though the test cases might not be exhaustive).

https://github.com/raghakot/keras-vis/blob/668b0e11dab93f3487f23c17e07f40554a8939e9/.travis.yml#L4-L15

Please tell us the details of your problems; we might be able to resolve them.

jinchenglee commented 5 years ago

Thanks for the prompt reply. I did see it installed into both the 2.7 and 3.5 site-packages directories.

I noticed two issues when running the visualize_attention.ipynb notebook:

1) It seems keras-vis assumes 'channels_last'; I see an error reported because I use 'channels_first' in my .keras config. This is easily fixed locally by applying np.transpose to the input image (see the sketch after the traceback below).

2) I got an error running visualize_saliency():

```python
import matplotlib.pyplot as plt  # model, bgr_img and img are defined in earlier notebook cells

from vis.visualization import visualize_saliency, overlay

titles = ['right steering', 'left steering', 'maintain steering']
modifiers = [None, 'negate', 'small_values']

for i, modifier in enumerate(modifiers):
    heatmap = visualize_saliency(model, layer_idx=-1, filter_indices=0,
                                 seed_input=bgr_img, grad_modifier=modifier)
    plt.figure()
    plt.title(titles[i])
    # overlay is used to alpha blend the heatmap onto img
    plt.imshow(overlay(img, heatmap, alpha=0.7))
```

I tried changing bgr_img to the channels_first version, but I still get the same error.

```
InvalidArgumentError                      Traceback (most recent call last)
in ()
      5 for i, modifier in enumerate(modifiers):
      6     heatmap = visualize_saliency(model, layer_idx=-1, filter_indices=0,
----> 7                                  seed_input=bgr_img, grad_modifier=modifier)
      8     plt.figure()
      9     plt.title(titles[i])

~/.virtualenvs/cv/lib/python3.5/site-packages/vis/visualization/saliency.py in visualize_saliency(model, layer_idx, filter_indices, seed_input, backprop_modifier, grad_modifier)
    123         (ActivationMaximization(model.layers[layer_idx], filter_indices), -1)
    124     ]
--> 125     return visualize_saliency_with_losses(model.input, losses, seed_input, grad_modifier)
    126
    127

~/.virtualenvs/cv/lib/python3.5/site-packages/vis/visualization/saliency.py in visualize_saliency_with_losses(input_tensor, losses, seed_input, grad_modifier)
     71     """
     72     opt = Optimizer(input_tensor, losses, norm_grads=False)
---> 73     grads = opt.minimize(seed_input=seed_input, max_iter=1, grad_modifier=grad_modifier, verbose=False)[1]
     74
     75     channel_idx = 1 if K.image_data_format() == 'channels_first' else -1

~/.virtualenvs/cv/lib/python3.5/site-packages/vis/optimizer.py in minimize(self, seed_input, max_iter, input_modifiers, grad_modifier, callbacks, verbose)
    141
    142         # 0 learning phase for 'test'
--> 143         computed_values = self.compute_fn([seed_input, 0])
    144         losses = computed_values[:len(self.loss_names)]
    145         named_losses = zip(self.loss_names, losses)

~/.virtualenvs/cv/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in __call__(self, inputs)
   2664                 return self._legacy_call(inputs)
   2665
-> 2666             return self._call(inputs)
   2667         else:
   2668             if py_any(is_tensor(x) for x in inputs):

~/.virtualenvs/cv/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in _call(self, inputs)
   2633                                 feed_symbols,
   2634                                 symbol_vals,
-> 2635                                 session)
   2636         fetched = self._callable_fn(*array_vals)
   2637         return fetched[:len(self.outputs)]

~/.virtualenvs/cv/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py in _make_callable(self, feed_arrays, feed_symbols, symbol_vals, session)
   2585             callable_opts.target.append(self.updates_op.name)
   2586         # Create callable.
-> 2587         callable_fn = session._make_callable_from_options(callable_opts)
   2588         # Cache parameters corresponding to the generated callable, so that
   2589         # we can detect future mismatches and refresh the callable.

~/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/client/session.py in _make_callable_from_options(self, callable_options)
   1478     """
   1479     self._extend_graph()
-> 1480     return BaseSession._Callable(self, callable_options)
   1481
   1482

~/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/client/session.py in __init__(self, session, callable_options)
   1439       else:
   1440         self._handle = tf_session.TF_DeprecatedSessionMakeCallable(
-> 1441             session._session, options_ptr, status)
   1442     finally:
   1443       tf_session.TF_DeleteBuffer(options_ptr)

~/.virtualenvs/cv/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg)
    517           None, None,
    518           compat.as_text(c_api.TF_Message(self.status.status)),
--> 519           c_api.TF_GetCode(self.status.status))
    520     # Delete the underlying status object from memory otherwise it stays alive
    521     # as there is a reference to status from this from the traceback due to

InvalidArgumentError: input_1:0 is both fed and fetched.
```
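For clarity, here is a minimal sketch of the transpose workaround mentioned in point 1) above. It assumes bgr_img is a single channels_first image of shape (channels, height, width); the array contents and dimensions are dummy values:

```python
import numpy as np

# Dummy channels_first image, shape (channels, height, width).
bgr_img = np.random.rand(3, 160, 320).astype(np.float32)

# Move the channel axis to the end so the array becomes channels_last,
# i.e. shape (height, width, channels), which the notebook example expects.
bgr_img = np.transpose(bgr_img, (1, 2, 0))

print(bgr_img.shape)  # (160, 320, 3)
```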
jinchenglee commented 5 years ago

#116

I see this is related to TensorFlow 1.8.0, which I am using now. It would be appreciated if you could upgrade keras-vis to support TF 1.8.0. Thanks.
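For reference, a quick way to confirm the framework versions in the active virtualenv (just the standard version attributes, nothing keras-vis specific):

```python
import tensorflow as tf
import keras

# Confirm the versions the notebook is actually running against
# (TensorFlow 1.8.0 in this report).
print(tf.__version__)
print(keras.__version__)
```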

keisen commented 5 years ago

@jinchenglee, thanks for your comment.

  1) It seems keras-vis assumes 'channels_last'; I see an error reported because I use 'channels_first' in my .keras config. This is easily fixed locally by applying np.transpose to the input image.

No, keras-vis supports both channels_last and channels_first. Please show us your error log if you can.
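As a side note, here is a minimal sketch of how the Keras data format can be inspected and overridden at runtime; this mirrors the image_data_format entry in ~/.keras/keras.json:

```python
from keras import backend as K

# Data format configured in ~/.keras/keras.json ('channels_last' by default).
print(K.image_data_format())

# It can also be overridden for the current process without editing the file.
K.set_image_data_format('channels_first')
print(K.image_data_format())  # 'channels_first'
```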

  2) I got an error running visualize_saliency():

This problem was solved by PR #120. Please install keras-vis from source (i.e., via setup.py) or run:

```
pip install git+https://github.com/raghakot/keras-vis.git -U
```
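After reinstalling, a quick sanity check that the build from GitHub (rather than the older PyPI release) is the one on the path; this is a sketch using the standard pkg_resources metadata lookup and assumes the distribution is registered under the name keras-vis, matching the PyPI package:

```python
import pkg_resources

# Show which keras-vis distribution is installed in the active environment.
print(pkg_resources.get_distribution('keras-vis').version)
```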
jinchenglee commented 5 years ago

Solved the issue. Thanks.

keisen commented 5 years ago

I'm glad :smile: