infocusp / tf_cnnvis

CNN visualization tool in TensorFlow
MIT License

Example not working: No Layer with layer name = conv1... #1

Closed · shuang1330 closed this 7 years ago

shuang1330 commented 7 years ago

Hi! I got this error when I tried to run the example. It says:

No Layer with layer name = conv1
No Layer with layer name = conv2_1
No Layer with layer name = conv2_2
No Layer with layer name = conv3
No Layer with layer name = conv4_1
No Layer with layer name = conv4_2
No Layer with layer name = conv5_1
No Layer with layer name = conv5_2
Skipping. Too many featuremap. May cause memory errors.
Skipping. Too many featuremap. May cause memory errors.
No Layer with layer name = MaxPool
No Layer with layer name = MaxPool_1
No Layer with layer name = MaxPool_2
No Layer with layer name = MaxPool_3
No Layer with layer name = MaxPool_4
Total Time = 39.663317

When I tried the command tf_cnnvis.get_visualization(graph_or_path = tf.get_default_graph(), value_feed_dict = feed_dict, input_tensor=None, layers=['r','p','c'], path_logdir='./Log', path_outdir='./Output', force=False, n=8) on a simple model with only 2 conv layers, 1 max_pool and 2 fc layers, it didn't generate any output/log files either.

Thank you in advance for looking into the problem I'm having.

BhagyeshVikani commented 7 years ago

Thank you for reporting the issue. Please reinstall with sudo python setup.py install and check whether the bug is fixed.

And please share details about the second model you created so that we can debug it.

shuang1330 commented 7 years ago

Hi, thank you for helping me with that! I didn't use setup.py; instead, I just copied the entire folder and ran it from there. When I did use the setup.py file, it installed into the system Python path, but I'm using Anaconda, and when I call python from the terminal the Anaconda Python is used. I'm not sure why setup.py installs into the other Python path.
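
A quick way to check which interpreter and import path are actually in use (a minimal sketch, nothing specific to tf_cnnvis; run it with the same python you use for the example):

import sys
print(sys.executable)  # full path of the running Python (anaconda vs system)
print(sys.path)        # directories searched when importing tf_cnnvis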

The model I'm using is copied from somewhere on GitHub (I can't find the source now), but it's a pretty simple one:


from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import sys

from tensorflow.examples.tutorials.mnist import input_data

import tensorflow as tf
from tensorflow.python.client import timeline

import tf_cnnvis.tf_cnnvis

FLAGS = None

def deepnn(x):
  x_image = tf.reshape(x, [-1, 28, 28, 1])

  W_conv1 = weight_variable([5, 5, 1, 32])
  b_conv1 = bias_variable([32])
  h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)

  h_pool1 = max_pool_2x2(h_conv1)

  W_conv2 = weight_variable([5, 5, 32, 64])
  b_conv2 = bias_variable([64])
  h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)

  h_pool2 = max_pool_2x2(h_conv2)

  W_fc1 = weight_variable([7 * 7 * 64, 1024])
  b_fc1 = bias_variable([1024])

  h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
  h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

  keep_prob = tf.placeholder(tf.float32)
  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

  W_fc2 = weight_variable([1024, 10])
  b_fc2 = bias_variable([10])

  y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
  return y_conv, keep_prob

def conv2d(x, W):
  return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
  return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1], padding='SAME')

def weight_variable(shape):
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)

def bias_variable(shape):
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)

if __name__ == '__main__':
  mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)

  x = tf.placeholder(tf.float32, [None, 784])
  y_ = tf.placeholder(tf.float32, [None, 10])
  y_conv, keep_prob = deepnn(x)

  cross_entropy = tf.reduce_mean(
      tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
  train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
  correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
  accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

  run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
  run_metadata = tf.RunMetadata()

  with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = mnist.train.next_batch(50)
    feed_dict = {x: batch[0], y_: batch[1], keep_prob: 0.5}
    sess.run((y_conv, keep_prob), feed_dict, options = run_options, run_metadata=run_metadata)
    tl = timeline.Timeline(run_metadata.step_stats)
    ctf = tl.generate_chrome_trace_format()
    with open('timeline.json', 'w') as f:
        f.write(ctf)

    tf_cnnvis.get_visualization(graph_or_path = tf.get_default_graph(), value_feed_dict = feed_dict, input_tensor=None, layers=['r','p','c'], path_logdir='./Log',
                                path_outdir='./Output', force=False, n=8)


javiribera commented 7 years ago

I can confirm these errors even without anaconda, using a virtualenv. Steps to reproduce:

  1. Clone the repo
  2. Create virtualenv
  3. source your new virtualenv
  4. pip install Pillow numpy scipy tensorflow h5py
  5. Install the package as in the readme
  6. Convert the demo ipynb with jupyter nbconvert --to script tf_cnnvis_Example1.ipynb
  7. Run
    (tf_cnnvis) ╭─javiribera@sonic  ~/Downloads/tf_cnnvis/examples  ‹master*› 
    ╰─$ py tf_cnnvis_Example1.py 
    100% [......................................................................] 243904576 / 243904576
    Saved under ./alexnet_weights.h5
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
    No Layer with layer name = conv1
    No Layer with layer name = conv2_1
    No Layer with layer name = conv2_2
    No Layer with layer name = conv3
    No Layer with layer name = conv4_1
    No Layer with layer name = conv4_2
    No Layer with layer name = conv5_1
    No Layer with layer name = conv5_2
    Skipping. Too many featuremap. May cause memory errors.
    Skipping. Too many featuremap. May cause memory errors.
    No Layer with layer name = MaxPool
    No Layer with layer name = MaxPool_1
    No Layer with layer name = MaxPool_2
    No Layer with layer name = MaxPool_3
    No Layer with layer name = MaxPool_4
    No Layer with layer name = Conv2D
    No Layer with layer name = Conv2D_1
    No Layer with layer name = Conv2D_2
    No Layer with layer name = Conv2D_3
    No Layer with layer name = Conv2D_4
    No Layer with layer name = Conv2D_5
    No Layer with layer name = Conv2D_6
    No Layer with layer name = Conv2D_7
    Total Time = 37.623156

No file is created in the current directory.

javiribera commented 7 years ago

For some reason, the error above is fixed after this pull request: https://github.com/InFoCusp/tf_cnnvis/pull/2

(test) ╭─javiribera@sonic  ~/Downloads/tf_cnnvis/examples  ‹master*› 
╰─$ py tf_cnnvis_Example1.py 
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
Reconstruction Completed for conv1 layer. Time taken = 3.557765 s
Reconstruction Completed for conv2_1 layer. Time taken = 10.330736 s
Reconstruction Completed for conv2_2 layer. Time taken = 11.265965 s
Reconstruction Completed for conv3 layer. Time taken = 38.331542 s
Reconstruction Completed for conv4_1 layer. Time taken = 19.666729 s
Reconstruction Completed for conv4_2 layer. Time taken = 20.610766 s
Reconstruction Completed for conv5_1 layer. Time taken = 10.944213 s
Reconstruction Completed for conv5_2 layer. Time taken = 10.348010 s
Skipping. Too many featuremap. May cause memory errors.
Skipping. Too many featuremap. May cause memory errors.
Reconstruction Completed for MaxPool layer. Time taken = 4.010168 s
Reconstruction Completed for MaxPool_1 layer. Time taken = 10.653442 s
Reconstruction Completed for MaxPool_2 layer. Time taken = 11.627388 s
Reconstruction Completed for MaxPool_3 layer. Time taken = 11.479884 s
Reconstruction Completed for MaxPool_4 layer. Time taken = 11.486936 s
Reconstruction Completed for Conv2D layer. Time taken = 4.330865 s
Reconstruction Completed for Conv2D_1 layer. Time taken = 13.503956 s
Reconstruction Completed for Conv2D_2 layer. Time taken = 12.696401 s
Reconstruction Completed for Conv2D_3 layer. Time taken = 46.248193 s
Reconstruction Completed for Conv2D_4 layer. Time taken = 23.508405 s
Reconstruction Completed for Conv2D_5 layer. Time taken = 23.400738 s
Reconstruction Completed for Conv2D_6 layer. Time taken = 16.414462 s
Reconstruction Completed for Conv2D_7 layer. Time taken = 17.022638 s
Total Time = 333.895082

BhagyeshVikani commented 7 years ago

Hi @shuang1330. Here's the slightly modified script you'll need to use; the explanation is below.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import sys
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
from tensorflow.python.client import timeline
import tf_cnnvis.tf_cnnvis

FLAGS = None

def deepnn(x):
  x_image = tf.reshape(x, [-1, 28, 28, 1])

  W_conv1 = weight_variable([5, 5, 1, 32])
  b_conv1 = bias_variable([32])
  h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)

  h_pool1 = max_pool_2x2(h_conv1)

  W_conv2 = weight_variable([5, 5, 32, 64])
  b_conv2 = bias_variable([64])
  h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)

  h_pool2 = max_pool_2x2(h_conv2)

  W_fc1 = weight_variable([7 * 7 * 64, 1024])
  b_fc1 = bias_variable([1024])

  h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
  h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

  keep_prob = tf.placeholder(tf.float32)
  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

  W_fc2 = weight_variable([1024, 10])
  b_fc2 = bias_variable([10])

  y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
  return x_image, y_conv, keep_prob

def conv2d(x, W):
  return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
  return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1], padding='SAME')

def weight_variable(shape):
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)

def bias_variable(shape):
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)

if __name__ == '__main__':
  mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)

  x = tf.placeholder(tf.float32, [None, 784])
  y_ = tf.placeholder(tf.float32, [None, 10])
  x_image, y_conv, keep_prob = deepnn(x)

  cross_entropy = tf.reduce_mean(
      tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
  train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
  correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
  accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

  run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
  run_metadata = tf.RunMetadata()

  with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = mnist.train.next_batch(50)
    feed_dict = {x: batch[0], y_: batch[1], keep_prob: 0.5}
    sess.run((y_conv, keep_prob), feed_dict, options = run_options, run_metadata=run_metadata)
    tl = timeline.Timeline(run_metadata.step_stats)
    ctf = tl.generate_chrome_trace_format()
    with open('timeline.json', 'w') as f:
        f.write(ctf)

    tf_cnnvis.get_visualization(graph_or_path = tf.get_default_graph(), value_feed_dict = feed_dict, input_tensor=x_image, layers=['r','p','c'], path_logdir='./Log', path_outdir='./Output', force=False, n=8)

The tf_cnnvis library is designed to compute the reconstructed image for input tensors that are 4D (samples x height x width x channels). The network above takes in images in a flattened format, i.e. (samples x (h*w*c)), at the first placeholder and only reshapes them to 4D at a later point. The only change needed was to pass this reshaped tensor as the input_tensor argument to get_visualization, which has been done in the script above. Together with the changes by @javiribera, this now works. Thanks to @shuang1330, we have also included this example in our examples directory.
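
In short, the shape requirement looks like this (a minimal sketch reusing the names from the script above):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])  # flattened input: (samples, h*w*c)
x_image = tf.reshape(x, [-1, 28, 28, 1])     # 4D input: (samples, height, width, channels)
# Pass x_image, not x, as the input_tensor argument to get_visualization,
# so tf_cnnvis can map reconstructions back onto an actual image.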

BhagyeshVikani commented 7 years ago

Please test and close the issue if the bug has been resolved.