Closed HarrisDePerceptron closed 4 years ago
I'm glad you're enjoying Lucid! (That first feature visualization is neat -- it looks vaguely like a channel objective of a high-low frequency detector.)
Thanks for the detailed debugging report. Could you try adding transforms=[] when you call render_vis()?
It looks to me like your strided convolutions have really strict shape requirements. Some of the transformations used for transformation robustness change the input shape, which causes problems when the graph constrains shapes that tightly.
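To make the shape issue concrete, here is a small sketch (not Lucid's actual internals) of how I understand pad and jitter to affect the spatial size: pad grows the image by a fixed margin on every side, while jitter crops a slightly smaller random window. The sizes below are illustrative.

```python
# Sketch of how pad/jitter change the spatial size that eventually
# reaches a strided or dilated convolution (assumed semantics, not
# Lucid's exact implementation).

def pad_shape(h, w, pad):
    # transform.pad(pad) adds `pad` pixels on every side
    return h + 2 * pad, w + 2 * pad

def jitter_shape(h, w, d):
    # transform.jitter(d) crops a random (h - d, w - d) window
    return h - d, w - d

h, w = 128, 128
h, w = pad_shape(h, w, 4)     # -> (136, 136)
h, w = jitter_shape(h, w, 4)  # -> (132, 132)
print(h, w)
```

So the size a downstream conv sees varies per optimization step, which is exactly what a strict shape requirement cannot tolerate.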
If that works, try something like:
render.render_vis(
    ...
    transforms=[transform.pad(4, mode='constant', constant_value=.5), transform.jitter(4)]
)
Sorry for the delayed response considering yours was instant.
Thanks @colah, it did resolve the issue. The default transformations were the cause, in particular the pad transformation. jitter works fine; I haven't tested the other transformations. Thanks to your response I was also able to generate visualizations for higher layers with dilation!
Wonderful! So glad you have things working. :)
I am working with another model and ran into this problem as well. Could someone please explain the arguments here? Why do we pad by 4 with value .5, and jitter by 4?
@misaka-10032 In my case the padding was the problem for the dilations, so the model worked without it; it therefore had to be overridden/removed from the default arguments. As for jitter, it's used for transformation robustness. You can read more about this in the author's Original Blog or this Notebook.
Thanks for replying. My model has dilation as well. I tried 4 and 8 for certain layers, but no luck. I was wondering how this value is computed, so I can compute it myself.
I think I figured out a way. There is a function crop_or_pad_to(). I should specify the input size and append this transform to the standard transforms.
image = lucid_render.render_vis(
model, 'MobilenetV2/expanded_conv_14/project/Conv2D:0',
transforms=[
lucid_transform.pad(12, mode="constant", constant_value=.5),
lucid_transform.jitter(8),
lucid_transform.random_scale([1 + (i - 5) / 50. for i in range(11)]),
lucid_transform.random_rotate(list(range(-10, 11)) + 5 * [0]),
lucid_transform.jitter(4),
# Limit the input size.
lucid_transform.crop_or_pad_to(127, 127),
], verbose=False)
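The crop_or_pad_to(127, 127) at the end pins the final input size regardless of what the earlier transforms did, so the dilated conv always sees the same spatial shape. If you want to compute a compatible size yourself rather than trial-and-error, one rough rule of thumb (an assumption, not a Lucid API) is to pick a size that stays divisible by the dilation rate after the network's downsampling. The helper below is hypothetical; stride_product (the product of strides before the dilated layer) and rate (the dilation rate) must be read off your own graph.

```python
# Hypothetical helper: round an input size up so that, after downsampling
# by `stride_product`, the spatial dim reaching the dilated conv is a
# multiple of the dilation `rate`. This is a rough sketch assuming
# SAME-padded convs; verify against your actual graph.

def compatible_size(size, stride_product, rate):
    block = stride_product * rate
    return -(-size // block) * block  # ceil to the next multiple of block

print(compatible_size(127, 16, 2))  # -> 128
print(compatible_size(128, 16, 2))  # -> 128
```

You would then pass the result to crop_or_pad_to() as the last transform, as in the snippet above.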
Lucid Version: 0.3.9
Tensorflow Version: 1.15.x
Python: 3.7
Hi. I just tried out Lucid and really loved it. I tested almost all visualizations for models from the model zoo, then decided to import my custom model for visualization. I managed to convert my Keras model into a single graph.pb file by following the instructions here. My original model was written in TensorFlow 2 and converted to a frozen graph using this tutorial, then loaded into TensorFlow 1 using the "usual" graph parsing from a .pb file. After loading the graph in TF1, I saved it again using the command from the instructions:
The Original Model
I managed to generate visualizations up to the layers without dilations (YAY!!):
The Problem
As soon as I try to visualize layers with dilation (and any layers after them), I get a weird error:
InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: padded_shape[0]=37 is not divisible by block_shape[0]=2 [[node import/import/functional_1/conv5_1/Conv2D/SpaceToBatchND (defined at /home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]
(1) Invalid argument: padded_shape[0]=37 is not divisible by block_shape[0]=2 [[node import/import/functional_1/conv5_1/Conv2D/SpaceToBatchND (defined at /home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1748) ]] [[Mean/_29]]
0 successful operations. 0 derived errors ignored.
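The failing node makes the constraint explicit: TensorFlow implements dilated convolution as a SpaceToBatchND / conv / BatchToSpaceND sandwich, and SpaceToBatchND requires each padded spatial dimension to be divisible by the block shape (here, the dilation rate). Plugging in the numbers from the error:

```python
# The numbers from the error message above: a padded spatial dim of 37
# cannot be split into blocks of 2, which is exactly what
# SpaceToBatchND rejects with InvalidArgumentError.
padded_dim = 37
block = 2  # block_shape[0], i.e. the dilation rate
print(padded_dim % block)  # non-zero remainder -> the error fires
```

Any transform that shifts the input size to an odd value at this layer will trip the check; that is why removing/changing the default pad transform resolves it.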
Sample conv5_1 node protobuf:
Final Graph node names: