tensorflow / lucid

A collection of infrastructure and tools for research in neural network interpretability.
Apache License 2.0

Dilation Layers not working (Custom Model) #257

Closed HarrisDePerceptron closed 4 years ago

HarrisDePerceptron commented 4 years ago

Lucid version: 0.3.9
TensorFlow version: 1.15.x
Python: 3.7

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.4 LTS
Release:    18.04
Codename:   bionic

Hi. I just tried out Lucid and really loved it. I tested almost all of the visualizations for models from the model zoo, then decided to import my own custom model for visualization. I managed to convert my Keras model into a single graph.pb file by following the instructions here. The original model was written in TensorFlow 2, converted to a frozen graph using this tutorial, and loaded into TensorFlow 1 with the usual graph parsing from a .pb file (a rough sketch of that conversion step follows the save call below). After loading the graph in TF1, I saved it again using the command from the instructions:

    Model.save(
      'final_output_pb_path',
      image_shape=[256,256, 1],
      input_name='input_name',
      output_names=['output_node_name'],
      image_value_range=[-50,50],
    )
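
For context, the TF2 → frozen-graph step mentioned above was roughly the standard freezing recipe (a sketch only; the exact code followed the linked tutorial, and keras_model here is a placeholder for my TF2 model):

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Wrap the Keras model in a concrete function and fold its variables into constants
full_model = tf.function(lambda x: keras_model(x))
concrete_func = full_model.get_concrete_function(
    tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype))
frozen_func = convert_variables_to_constants_v2(concrete_func)

# Write out the single frozen graph.pb that is then re-saved with Model.save above
tf.io.write_graph(frozen_func.graph.as_graph_def(),
                  logdir='.', name='graph.pb', as_text=False)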

The Original Model

from tensorflow.keras.layers import Input, Conv2D, BatchNormalization, UpSampling2D

input_tensor = Input(shape=[256, 256, 1])
x = Conv2D(64,3, padding="same", activation='relu', name='conv1_1')(input_tensor)
x = Conv2D(64,3, strides=[2,2], padding="same",activation='relu', name='conv1_2')(x)
x = BatchNormalization()(x)

x = Conv2D(128,3, padding="same", activation='relu', name='conv2_1')(x)
x = Conv2D(128,3, strides=[2,2], padding="same",activation='relu', name='conv2_2')(x)
x = BatchNormalization()(x)

x = Conv2D(256,3, padding="same", activation='relu', name='conv3_1')(x)
x = Conv2D(256,3, strides=[2,2], padding="same",activation='relu', name='conv3_2')(x)
x = BatchNormalization()(x)

x = Conv2D(512,3, padding="same", activation='relu', name='conv4_1')(x)
x = Conv2D(512,3,  padding="same",activation='relu', name='conv4_2')(x)
x = Conv2D(512,3,  padding="same",activation='relu', name='conv4_3')(x)
x = BatchNormalization()(x)

x = Conv2D(512,3, padding="same", activation='relu', name='conv5_1', dilation_rate=2)(x)
x = Conv2D(512,3,  padding="same",activation='relu', name='conv5_2', dilation_rate=2)(x)
x = Conv2D(512,3,  padding="same",activation='relu', name='conv5_3', dilation_rate=2)(x)
x = BatchNormalization()(x)

x = Conv2D(512,3, padding="same", activation='relu', name='conv6_1', dilation_rate=2)(x)
x = Conv2D(512,3,  padding="same",activation='relu', name='conv6_2', dilation_rate=2)(x)
x = Conv2D(512,3,  padding="same",activation='relu', name='conv6_3', dilation_rate=2)(x)
x = BatchNormalization()(x)

x = Conv2D(512,3, padding="same", activation='relu', name='conv7_1')(x)
x = Conv2D(512,3,  padding="same",activation='relu', name='conv7_2')(x)
x = Conv2D(512,3,  padding="same",activation='relu', name='conv7_3')(x)
x = BatchNormalization()(x)

x = UpSampling2D(size=[2,2])(x)

x = Conv2D(256,3, padding="same", activation='relu', name='conv8_1')(x)
x = Conv2D(265,3,  padding="same",activation='relu', name='conv8_2')(x)
x = Conv2D(256,3,  padding="same",activation='relu', name='conv8_3')(x)
x = BatchNormalization()(x)

I managed to generate visualizations up to the layers without dilations (YAY!!!):

model = Model.load('final_output_pb_path')
param_f = lambda: param.color.to_valid_rgb(param.spatial.naive((1, 256,256,1)))
_ = render.render_vis(model, "import/functional_1/conv4_3/Relu:0", param_f=param_f)

[image: feature-viz-colorization visualization]

The Problem

As soon as I try to visualize a layer with dilation (or any layer after it), I get a weird error:

_ = render.render_vis(model, "import/functional_1/conv5_1/Relu:0", param_f=param_f)
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
~/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _do_call(self, fn, *args)
   1364     try:
-> 1365       return fn(*args)
   1366     except errors.OpError as e:

~/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
   1349       return self._call_tf_sessionrun(options, feed_dict, fetch_list,
-> 1350                                       target_list, run_metadata)
   1351 

~/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
   1442                                             fetch_list, target_list,
-> 1443                                             run_metadata)
   1444 

InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument: padded_shape[0]=37 is not divisible by block_shape[0]=2
     [[{{node import/import/functional_1/conv5_1/Conv2D/SpaceToBatchND}}]]
  (1) Invalid argument: padded_shape[0]=37 is not divisible by block_shape[0]=2
     [[{{node import/import/functional_1/conv5_1/Conv2D/SpaceToBatchND}}]]
     [[Mean/_29]]
0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-46-6d04f696ee88> in <module>
----> 1 _ = render.render_vis(model, "import/functional_1/conv5_1/Relu:0", param_f=param_f)

~/anaconda3/envs/lucid/lib/python3.7/site-packages/lucid/optvis/render.py in render_vis(model, objective_f, param_f, optimizer, transforms, thresholds, print_objectives, verbose, relu_gradient_override, use_fixed_seed)
    101     try:
    102       for i in range(max(thresholds)+1):
--> 103         loss_, _ = sess.run([loss, vis_op])
    104         if i in thresholds:
    105           vis = t_image.eval()

~/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    954     try:
    955       result = self._run(None, fetches, feed_dict, options_ptr,
--> 956                          run_metadata_ptr)
    957       if run_metadata:
    958         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

~/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1178     if final_fetches or final_targets or (handle and feed_dict_tensor):
   1179       results = self._do_run(handle, final_targets, final_fetches,
-> 1180                              feed_dict_tensor, options, run_metadata)
   1181     else:
   1182       results = []

~/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1357     if handle is None:
   1358       return self._do_call(_run_fn, feeds, fetches, targets, options,
-> 1359                            run_metadata)
   1360     else:
   1361       return self._do_call(_prun_fn, handle, feeds, fetches)

~/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _do_call(self, fn, *args)
   1382                     '\nsession_config.graph_options.rewrite_options.'
   1383                     'disable_meta_optimizer = True')
-> 1384       raise type(e)(node_def, op, message)
   1385 
   1386   def _extend_graph(self):

InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument: padded_shape[0]=37 is not divisible by block_shape[0]=2
     [[node import/import/functional_1/conv5_1/Conv2D/SpaceToBatchND (defined at /home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]
  (1) Invalid argument: padded_shape[0]=37 is not divisible by block_shape[0]=2
     [[node import/import/functional_1/conv5_1/Conv2D/SpaceToBatchND (defined at /home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]
     [[Mean/_29]]
0 successful operations.
0 derived errors ignored.

Original stack trace for 'import/import/functional_1/conv5_1/Conv2D/SpaceToBatchND':
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/ipykernel_launcher.py", line 16, in <module>
    app.launch_new_instance()
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/traitlets/config/application.py", line 664, in launch_instance
    app.start()
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/ipykernel/kernelapp.py", line 612, in start
    self.io_loop.start()
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tornado/platform/asyncio.py", line 149, in start
    self.asyncio_loop.run_forever()
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/asyncio/base_events.py", line 541, in run_forever
    self._run_once()
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/asyncio/base_events.py", line 1786, in _run_once
    handle._run()
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/asyncio/events.py", line 88, in _run
    self._context.run(self._callback, *self._args)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tornado/ioloop.py", line 690, in <lambda>
    lambda f: self._run_callback(functools.partial(callback, future))
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tornado/ioloop.py", line 743, in _run_callback
    ret = callback()
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tornado/gen.py", line 787, in inner
    self.run()
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tornado/gen.py", line 748, in run
    yielded = self.gen.send(value)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 365, in process_one
    yield gen.maybe_future(dispatch(*args))
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tornado/gen.py", line 209, in wrapper
    yielded = next(result)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 268, in dispatch_shell
    yield gen.maybe_future(handler(stream, idents, msg))
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tornado/gen.py", line 209, in wrapper
    yielded = next(result)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 545, in execute_request
    user_expressions, allow_stdin,
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tornado/gen.py", line 209, in wrapper
    yielded = next(result)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/ipykernel/ipkernel.py", line 306, in do_execute
    res = shell.run_cell(code, store_history=store_history, silent=silent)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/ipykernel/zmqshell.py", line 536, in run_cell
    return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 2867, in run_cell
    raw_cell, store_history, silent, shell_futures)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 2895, in _run_cell
    return runner(coro)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/IPython/core/async_helpers.py", line 68, in _pseudo_sync_runner
    coro.send(None)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3072, in run_cell_async
    interactivity=interactivity, compiler=compiler, result=result)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3263, in run_ast_nodes
    if (await self.run_code(code, result,  async_=asy)):
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3343, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-46-6d04f696ee88>", line 1, in <module>
    _ = render.render_vis(model, "import/functional_1/conv5_1/Relu:0", param_f=param_f)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/lucid/optvis/render.py", line 95, in render_vis
    relu_gradient_override)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/lucid/optvis/render.py", line 177, in make_vis_T
    T = import_model(model, transform_f(t_image), t_image)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/lucid/optvis/render.py", line 257, in import_model
    T_ = model.import_graph(t_image, scope=scope, forget_xy_shape=True, input_map=input_map)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/lucid/modelzoo/vision_base.py", line 201, in import_graph
    self.graph_def, final_input_map, name=scope)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/importer.py", line 405, in import_graph_def
    producer_op_list=producer_op_list)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/importer.py", line 517, in _import_graph_def_internal
    _ProcessNewOps(graph)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/importer.py", line 243, in _ProcessNewOps
    for new_op in graph._add_new_tf_operations(compute_devices=False):  # pylint: disable=protected-access
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3561, in _add_new_tf_operations
    for c_op in c_api_util.new_tf_operations(self)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3561, in <listcomp>
    for c_op in c_api_util.new_tf_operations(self)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3451, in _create_op_from_tf_operation
    ret = Operation(c_op, self)
  File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
    self._traceback = tf_stack.extract_stack()

Sample conv5_1 node protobuf:

name: "functional_1/conv5_1/Conv2D"
op: "Conv2D"
input: "functional_1/conv5_1/Conv2D/SpaceToBatchND"
input: "functional_1/conv5_1/Conv2D/ReadVariableOp"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "data_format"
  value {
    s: "NHWC"
  }
}
attr {
  key: "dilations"
  value {
    list {
      i: 1
      i: 2
      i: 2
      i: 1
    }
  }
}
attr {
  key: "padding"
  value {
    s: "SAME"
  }
}
attr {
  key: "strides"
  value {
    list {
      i: 1
      i: 1
      i: 1
      i: 1
    }
  }
}
attr {
  key: "use_cudnn_on_gpu"
  value {
    b: true
  }
}

Final Graph node names:

import/x
import/functional_1/conv1_1/Conv2D/ReadVariableOp/resource
import/functional_1/conv1_1/Conv2D/ReadVariableOp
import/functional_1/conv1_1/Conv2D
import/functional_1/conv1_1/BiasAdd/ReadVariableOp/resource
import/functional_1/conv1_1/BiasAdd/ReadVariableOp
import/functional_1/conv1_1/BiasAdd
import/functional_1/conv1_1/Relu
import/functional_1/conv1_2/Conv2D/ReadVariableOp/resource
import/functional_1/conv1_2/Conv2D/ReadVariableOp
import/functional_1/conv1_2/Conv2D
import/functional_1/conv1_2/BiasAdd/ReadVariableOp/resource
import/functional_1/conv1_2/BiasAdd/ReadVariableOp
import/functional_1/conv1_2/BiasAdd
import/functional_1/conv1_2/Relu
import/functional_1/batch_normalization/ReadVariableOp/resource
import/functional_1/batch_normalization/ReadVariableOp
import/functional_1/batch_normalization/ReadVariableOp_1/resource
import/functional_1/batch_normalization/ReadVariableOp_1
import/functional_1/batch_normalization/FusedBatchNormV3/ReadVariableOp/resource
import/functional_1/batch_normalization/FusedBatchNormV3/ReadVariableOp
import/functional_1/batch_normalization/FusedBatchNormV3/ReadVariableOp_1/resource
import/functional_1/batch_normalization/FusedBatchNormV3/ReadVariableOp_1
import/functional_1/batch_normalization/FusedBatchNormV3
import/functional_1/conv2_1/Conv2D/ReadVariableOp/resource
import/functional_1/conv2_1/Conv2D/ReadVariableOp
import/functional_1/conv2_1/Conv2D
import/functional_1/conv2_1/BiasAdd/ReadVariableOp/resource
import/functional_1/conv2_1/BiasAdd/ReadVariableOp
import/functional_1/conv2_1/BiasAdd
import/functional_1/conv2_1/Relu
import/functional_1/conv2_2/Conv2D/ReadVariableOp/resource
import/functional_1/conv2_2/Conv2D/ReadVariableOp
import/functional_1/conv2_2/Conv2D
import/functional_1/conv2_2/BiasAdd/ReadVariableOp/resource
import/functional_1/conv2_2/BiasAdd/ReadVariableOp
import/functional_1/conv2_2/BiasAdd
import/functional_1/conv2_2/Relu
import/functional_1/batch_normalization_1/ReadVariableOp/resource
import/functional_1/batch_normalization_1/ReadVariableOp
import/functional_1/batch_normalization_1/ReadVariableOp_1/resource
import/functional_1/batch_normalization_1/ReadVariableOp_1
import/functional_1/batch_normalization_1/FusedBatchNormV3/ReadVariableOp/resource
import/functional_1/batch_normalization_1/FusedBatchNormV3/ReadVariableOp
import/functional_1/batch_normalization_1/FusedBatchNormV3/ReadVariableOp_1/resource
import/functional_1/batch_normalization_1/FusedBatchNormV3/ReadVariableOp_1
import/functional_1/batch_normalization_1/FusedBatchNormV3
import/functional_1/conv3_1/Conv2D/ReadVariableOp/resource
import/functional_1/conv3_1/Conv2D/ReadVariableOp
import/functional_1/conv3_1/Conv2D
import/functional_1/conv3_1/BiasAdd/ReadVariableOp/resource
import/functional_1/conv3_1/BiasAdd/ReadVariableOp
import/functional_1/conv3_1/BiasAdd
import/functional_1/conv3_1/Relu
import/functional_1/conv3_2/Conv2D/ReadVariableOp/resource
import/functional_1/conv3_2/Conv2D/ReadVariableOp
import/functional_1/conv3_2/Conv2D
import/functional_1/conv3_2/BiasAdd/ReadVariableOp/resource
import/functional_1/conv3_2/BiasAdd/ReadVariableOp
import/functional_1/conv3_2/BiasAdd
import/functional_1/conv3_2/Relu
import/functional_1/batch_normalization_2/ReadVariableOp/resource
import/functional_1/batch_normalization_2/ReadVariableOp
import/functional_1/batch_normalization_2/ReadVariableOp_1/resource
import/functional_1/batch_normalization_2/ReadVariableOp_1
import/functional_1/batch_normalization_2/FusedBatchNormV3/ReadVariableOp/resource
import/functional_1/batch_normalization_2/FusedBatchNormV3/ReadVariableOp
import/functional_1/batch_normalization_2/FusedBatchNormV3/ReadVariableOp_1/resource
import/functional_1/batch_normalization_2/FusedBatchNormV3/ReadVariableOp_1
import/functional_1/batch_normalization_2/FusedBatchNormV3
import/functional_1/conv4_1/Conv2D/ReadVariableOp/resource
import/functional_1/conv4_1/Conv2D/ReadVariableOp
import/functional_1/conv4_1/Conv2D
import/functional_1/conv4_1/BiasAdd/ReadVariableOp/resource
import/functional_1/conv4_1/BiasAdd/ReadVariableOp
import/functional_1/conv4_1/BiasAdd
import/functional_1/conv4_1/Relu
import/functional_1/conv4_2/Conv2D/ReadVariableOp/resource
import/functional_1/conv4_2/Conv2D/ReadVariableOp
import/functional_1/conv4_2/Conv2D
import/functional_1/conv4_2/BiasAdd/ReadVariableOp/resource
import/functional_1/conv4_2/BiasAdd/ReadVariableOp
import/functional_1/conv4_2/BiasAdd
import/functional_1/conv4_2/Relu
import/functional_1/conv4_3/Conv2D/ReadVariableOp/resource
import/functional_1/conv4_3/Conv2D/ReadVariableOp
import/functional_1/conv4_3/Conv2D
import/functional_1/conv4_3/BiasAdd/ReadVariableOp/resource
import/functional_1/conv4_3/BiasAdd/ReadVariableOp
import/functional_1/conv4_3/BiasAdd
import/functional_1/conv4_3/Relu
import/functional_1/batch_normalization_3/ReadVariableOp/resource
import/functional_1/batch_normalization_3/ReadVariableOp
import/functional_1/batch_normalization_3/ReadVariableOp_1/resource
import/functional_1/batch_normalization_3/ReadVariableOp_1
import/functional_1/batch_normalization_3/FusedBatchNormV3/ReadVariableOp/resource
import/functional_1/batch_normalization_3/FusedBatchNormV3/ReadVariableOp
import/functional_1/batch_normalization_3/FusedBatchNormV3/ReadVariableOp_1/resource
import/functional_1/batch_normalization_3/FusedBatchNormV3/ReadVariableOp_1
import/functional_1/batch_normalization_3/FusedBatchNormV3
import/functional_1/conv5_1/Conv2D/SpaceToBatchND/block_shape
import/functional_1/conv5_1/Conv2D/SpaceToBatchND/paddings
import/functional_1/conv5_1/Conv2D/SpaceToBatchND
import/functional_1/conv5_1/Conv2D/ReadVariableOp/resource
import/functional_1/conv5_1/Conv2D/ReadVariableOp
import/functional_1/conv5_1/Conv2D
import/functional_1/conv5_1/Conv2D/BatchToSpaceND/block_shape
import/functional_1/conv5_1/Conv2D/BatchToSpaceND/crops
import/functional_1/conv5_1/Conv2D/BatchToSpaceND
import/functional_1/conv5_1/BiasAdd/ReadVariableOp/resource
import/functional_1/conv5_1/BiasAdd/ReadVariableOp
import/functional_1/conv5_1/BiasAdd
import/functional_1/conv5_1/Relu
import/functional_1/conv5_2/Conv2D/SpaceToBatchND/block_shape
import/functional_1/conv5_2/Conv2D/SpaceToBatchND/paddings
import/functional_1/conv5_2/Conv2D/SpaceToBatchND
import/functional_1/conv5_2/Conv2D/ReadVariableOp/resource
import/functional_1/conv5_2/Conv2D/ReadVariableOp
import/functional_1/conv5_2/Conv2D
import/functional_1/conv5_2/Conv2D/BatchToSpaceND/block_shape
import/functional_1/conv5_2/Conv2D/BatchToSpaceND/crops
import/functional_1/conv5_2/Conv2D/BatchToSpaceND
import/functional_1/conv5_2/BiasAdd/ReadVariableOp/resource
import/functional_1/conv5_2/BiasAdd/ReadVariableOp
import/functional_1/conv5_2/BiasAdd
import/functional_1/conv5_2/Relu
import/functional_1/conv5_3/Conv2D/SpaceToBatchND/block_shape
import/functional_1/conv5_3/Conv2D/SpaceToBatchND/paddings
import/functional_1/conv5_3/Conv2D/SpaceToBatchND
import/functional_1/conv5_3/Conv2D/ReadVariableOp/resource
import/functional_1/conv5_3/Conv2D/ReadVariableOp
import/functional_1/conv5_3/Conv2D
import/functional_1/conv5_3/Conv2D/BatchToSpaceND/block_shape
import/functional_1/conv5_3/Conv2D/BatchToSpaceND/crops
import/functional_1/conv5_3/Conv2D/BatchToSpaceND
import/functional_1/conv5_3/BiasAdd/ReadVariableOp/resource
import/functional_1/conv5_3/BiasAdd/ReadVariableOp
import/functional_1/conv5_3/BiasAdd
import/functional_1/conv5_3/Relu
import/functional_1/batch_normalization_4/ReadVariableOp/resource
import/functional_1/batch_normalization_4/ReadVariableOp
import/functional_1/batch_normalization_4/ReadVariableOp_1/resource
import/functional_1/batch_normalization_4/ReadVariableOp_1
import/functional_1/batch_normalization_4/FusedBatchNormV3/ReadVariableOp/resource
import/functional_1/batch_normalization_4/FusedBatchNormV3/ReadVariableOp
import/functional_1/batch_normalization_4/FusedBatchNormV3/ReadVariableOp_1/resource
import/functional_1/batch_normalization_4/FusedBatchNormV3/ReadVariableOp_1
import/functional_1/batch_normalization_4/FusedBatchNormV3
import/functional_1/conv6_1/Conv2D/SpaceToBatchND/block_shape
import/functional_1/conv6_1/Conv2D/SpaceToBatchND/paddings
import/functional_1/conv6_1/Conv2D/SpaceToBatchND
import/functional_1/conv6_1/Conv2D/ReadVariableOp/resource
import/functional_1/conv6_1/Conv2D/ReadVariableOp
import/functional_1/conv6_1/Conv2D
import/functional_1/conv6_1/Conv2D/BatchToSpaceND/block_shape
import/functional_1/conv6_1/Conv2D/BatchToSpaceND/crops
import/functional_1/conv6_1/Conv2D/BatchToSpaceND
import/functional_1/conv6_1/BiasAdd/ReadVariableOp/resource
import/functional_1/conv6_1/BiasAdd/ReadVariableOp
import/functional_1/conv6_1/BiasAdd
import/functional_1/conv6_1/Relu
import/functional_1/conv6_2/Conv2D/SpaceToBatchND/block_shape
import/functional_1/conv6_2/Conv2D/SpaceToBatchND/paddings
import/functional_1/conv6_2/Conv2D/SpaceToBatchND
import/functional_1/conv6_2/Conv2D/ReadVariableOp/resource
import/functional_1/conv6_2/Conv2D/ReadVariableOp
import/functional_1/conv6_2/Conv2D
import/functional_1/conv6_2/Conv2D/BatchToSpaceND/block_shape
import/functional_1/conv6_2/Conv2D/BatchToSpaceND/crops
import/functional_1/conv6_2/Conv2D/BatchToSpaceND
import/functional_1/conv6_2/BiasAdd/ReadVariableOp/resource
import/functional_1/conv6_2/BiasAdd/ReadVariableOp
import/functional_1/conv6_2/BiasAdd
import/functional_1/conv6_2/Relu
import/functional_1/conv6_3/Conv2D/SpaceToBatchND/block_shape
import/functional_1/conv6_3/Conv2D/SpaceToBatchND/paddings
import/functional_1/conv6_3/Conv2D/SpaceToBatchND
import/functional_1/conv6_3/Conv2D/ReadVariableOp/resource
import/functional_1/conv6_3/Conv2D/ReadVariableOp
import/functional_1/conv6_3/Conv2D
import/functional_1/conv6_3/Conv2D/BatchToSpaceND/block_shape
import/functional_1/conv6_3/Conv2D/BatchToSpaceND/crops
import/functional_1/conv6_3/Conv2D/BatchToSpaceND
import/functional_1/conv6_3/BiasAdd/ReadVariableOp/resource
import/functional_1/conv6_3/BiasAdd/ReadVariableOp
import/functional_1/conv6_3/BiasAdd
import/functional_1/conv6_3/Relu
import/functional_1/batch_normalization_5/ReadVariableOp/resource
import/functional_1/batch_normalization_5/ReadVariableOp
import/functional_1/batch_normalization_5/ReadVariableOp_1/resource
import/functional_1/batch_normalization_5/ReadVariableOp_1
import/functional_1/batch_normalization_5/FusedBatchNormV3/ReadVariableOp/resource
import/functional_1/batch_normalization_5/FusedBatchNormV3/ReadVariableOp
import/functional_1/batch_normalization_5/FusedBatchNormV3/ReadVariableOp_1/resource
import/functional_1/batch_normalization_5/FusedBatchNormV3/ReadVariableOp_1
import/functional_1/batch_normalization_5/FusedBatchNormV3
import/functional_1/conv7_1/Conv2D/ReadVariableOp/resource
import/functional_1/conv7_1/Conv2D/ReadVariableOp
import/functional_1/conv7_1/Conv2D
import/functional_1/conv7_1/BiasAdd/ReadVariableOp/resource
import/functional_1/conv7_1/BiasAdd/ReadVariableOp
import/functional_1/conv7_1/BiasAdd
import/functional_1/conv7_1/Relu
import/functional_1/conv7_2/Conv2D/ReadVariableOp/resource
import/functional_1/conv7_2/Conv2D/ReadVariableOp
import/functional_1/conv7_2/Conv2D
import/functional_1/conv7_2/BiasAdd/ReadVariableOp/resource
import/functional_1/conv7_2/BiasAdd/ReadVariableOp
import/functional_1/conv7_2/BiasAdd
import/functional_1/conv7_2/Relu
import/functional_1/conv7_3/Conv2D/ReadVariableOp/resource
import/functional_1/conv7_3/Conv2D/ReadVariableOp
import/functional_1/conv7_3/Conv2D
import/functional_1/conv7_3/BiasAdd/ReadVariableOp/resource
import/functional_1/conv7_3/BiasAdd/ReadVariableOp
import/functional_1/conv7_3/BiasAdd
import/functional_1/conv7_3/Relu
import/functional_1/batch_normalization_6/ReadVariableOp/resource
import/functional_1/batch_normalization_6/ReadVariableOp
import/functional_1/batch_normalization_6/ReadVariableOp_1/resource
import/functional_1/batch_normalization_6/ReadVariableOp_1
import/functional_1/batch_normalization_6/FusedBatchNormV3/ReadVariableOp/resource
import/functional_1/batch_normalization_6/FusedBatchNormV3/ReadVariableOp
import/functional_1/batch_normalization_6/FusedBatchNormV3/ReadVariableOp_1/resource
import/functional_1/batch_normalization_6/FusedBatchNormV3/ReadVariableOp_1
import/functional_1/batch_normalization_6/FusedBatchNormV3
import/functional_1/up_sampling2d/Shape
import/functional_1/up_sampling2d/strided_slice/stack
import/functional_1/up_sampling2d/strided_slice/stack_1
import/functional_1/up_sampling2d/strided_slice/stack_2
import/functional_1/up_sampling2d/strided_slice
import/functional_1/up_sampling2d/Const
import/functional_1/up_sampling2d/mul
import/functional_1/up_sampling2d/resize/ResizeNearestNeighbor
import/functional_1/conv8_1/Conv2D/ReadVariableOp/resource
import/functional_1/conv8_1/Conv2D/ReadVariableOp
import/functional_1/conv8_1/Conv2D
import/functional_1/conv8_1/BiasAdd/ReadVariableOp/resource
import/functional_1/conv8_1/BiasAdd/ReadVariableOp
import/functional_1/conv8_1/BiasAdd
import/functional_1/conv8_1/Relu
import/functional_1/conv8_2/Conv2D/ReadVariableOp/resource
import/functional_1/conv8_2/Conv2D/ReadVariableOp
import/functional_1/conv8_2/Conv2D
import/functional_1/conv8_2/BiasAdd/ReadVariableOp/resource
import/functional_1/conv8_2/BiasAdd/ReadVariableOp
import/functional_1/conv8_2/BiasAdd
import/functional_1/conv8_2/Relu
import/functional_1/conv8_3/Conv2D/ReadVariableOp/resource
import/functional_1/conv8_3/Conv2D/ReadVariableOp
import/functional_1/conv8_3/Conv2D
import/functional_1/conv8_3/BiasAdd/ReadVariableOp/resource
import/functional_1/conv8_3/BiasAdd/ReadVariableOp
import/functional_1/conv8_3/BiasAdd
import/functional_1/conv8_3/Relu
import/functional_1/batch_normalization_7/ReadVariableOp/resource
import/functional_1/batch_normalization_7/ReadVariableOp
import/functional_1/batch_normalization_7/ReadVariableOp_1/resource
import/functional_1/batch_normalization_7/ReadVariableOp_1
import/functional_1/batch_normalization_7/FusedBatchNormV3/ReadVariableOp/resource
import/functional_1/batch_normalization_7/FusedBatchNormV3/ReadVariableOp
import/functional_1/batch_normalization_7/FusedBatchNormV3/ReadVariableOp_1/resource
import/functional_1/batch_normalization_7/FusedBatchNormV3/ReadVariableOp_1
import/functional_1/batch_normalization_7/FusedBatchNormV3
import/functional_1/up_sampling2d_1/Shape
import/functional_1/up_sampling2d_1/strided_slice/stack
import/functional_1/up_sampling2d_1/strided_slice/stack_1
import/functional_1/up_sampling2d_1/strided_slice/stack_2
import/functional_1/up_sampling2d_1/strided_slice
import/functional_1/up_sampling2d_1/Const
import/functional_1/up_sampling2d_1/mul
import/functional_1/up_sampling2d_1/resize/ResizeNearestNeighbor
import/functional_1/conv_ab/Conv2D/ReadVariableOp/resource
import/functional_1/conv_ab/Conv2D/ReadVariableOp
import/functional_1/conv_ab/Conv2D
import/functional_1/conv_ab/BiasAdd/ReadVariableOp/resource
import/functional_1/conv_ab/BiasAdd/ReadVariableOp
import/functional_1/conv_ab/BiasAdd
import/Identity
colah commented 4 years ago

I'm glad you're enjoying Lucid! (That first feature visualization is neat -- it looks vaguely like a channel objective of a high-low frequency detector.)

Thanks for the detailed debugging report. Could you try adding transforms=[] when you call render_vis()?

It looks to me like your dilated convolutions have really strict shape requirements. Some of the transformations used for transformation robustness change the input shape, which can cause problems when a layer has strict shape requirements.
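
For example, something along these lines (just a sketch, reusing the model and param_f from your snippet above):

_ = render.render_vis(
  model,
  "import/functional_1/conv5_1/Relu:0",
  param_f=param_f,
  transforms=[],  # disable all input transformations
)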

colah commented 4 years ago

If that works, try something like:

render.render_vis(
  ...
  transforms=[transform.pad(4, mode='constant', constant_value=.5), transform.jitter(4)]
)
HarrisDePerceptron commented 4 years ago

Sorry for the delayed response, especially considering yours was instant. Thanks @colah, that resolved the issue. The default transformations were the cause, in particular the pad transformation; jitter works fine. I haven't tested the other transformations. Thanks to your response I was able to generate visualizations for the higher layers with dilation as well!!
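
In case it helps anyone else, the call that ended up working for me looked roughly like this (a sketch; I only kept the jitter transform):

from lucid.optvis import transform

_ = render.render_vis(
  model,
  "import/functional_1/conv5_1/Relu:0",
  param_f=param_f,
  transforms=[transform.jitter(4)],  # no pad transform -- padding broke the dilated conv shapes
)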

colah commented 4 years ago

Wonderful! So glad you have things working. :)

misaka-10032 commented 4 years ago

I am working with another model and ran into this problem as well. Could someone please explain the arguments here? Why do we pad by 4 with value .5, and jitter by 4?

HarrisDePerceptron commented 4 years ago

@misaka-10032 Actually, in my case the padding was the problem for the dilated layers, so the model only worked without padding; it therefore had to be overridden/removed from the default transforms. As for jitter, it's used for transformation robustness. You can read more on this in the author's Original Blog or this Notebook.
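
To make the shape changes concrete, here is a tiny sketch (just an illustration, not from the thread) of what pad(4) and jitter(4) do to a 256x256 input in the same TF1 environment:

import numpy as np
import tensorflow as tf
from lucid.optvis import transform

img = tf.placeholder(tf.float32, [1, 256, 256, 1])
padded = transform.pad(4, mode='constant', constant_value=.5)(img)  # +4 px on each side -> 264x264
jittered = transform.jitter(4)(padded)                              # random crop removes 4 px -> 260x260

with tf.Session() as sess:
    out = sess.run(jittered, {img: np.zeros([1, 256, 256, 1], np.float32)})
print(out.shape)  # (1, 260, 260, 1) -- no longer the 256x256 my frozen graph was built for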

misaka-10032 commented 4 years ago

Thanks for replying. My model has dilation as well. I tried 4 and 8 for certain layers, but no luck. I was wondering how this value is computed, so I can work it out myself.

misaka-10032 commented 4 years ago

I think I figured out a way. There is a function crop_or_pad_to(); I just specify the input size and append this transform to the standard transforms.

# Aliases implied by the snippet:
from lucid.optvis import render as lucid_render
from lucid.optvis import transform as lucid_transform

image = lucid_render.render_vis(
      model, 'MobilenetV2/expanded_conv_14/project/Conv2D:0',
      transforms=[
          lucid_transform.pad(12, mode="constant", constant_value=.5),
          lucid_transform.jitter(8),
          lucid_transform.random_scale([1 + (i - 5) / 50. for i in range(11)]),
          lucid_transform.random_rotate(list(range(-10, 11)) + 5 * [0]),
          lucid_transform.jitter(4),
          # Limit the input size.
          lucid_transform.crop_or_pad_to(127, 127),
      ], verbose=False)