nfmcclure / tensorflow_cookbook

Code for Tensorflow Machine Learning Cookbook
https://www.packtpub.com/big-data-and-business-intelligence/tensorflow-machine-learning-cookbook-second-edition
MIT License

Chap06 05update - created notebook & parameterized running #61

Closed · jimthompson5802 closed this 7 years ago

jimthompson5802 commented 7 years ago

@nfmcclure I want to point out that the following changes were made to the sample program.

First, is there a seed you want me to use so the results from the book can be reproduced?
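(As an aside on reproducibility, and only an illustration rather than part of the changes: the input arrays in this recipe are drawn with NumPy, so seeding NumPy makes the generated data identical across runs; TF 1.x additionally offers a graph-level `tf.set_random_seed`. A minimal NumPy-only sketch:)

```python
import numpy as np

# Seeding NumPy makes randomly generated input data repeatable.
np.random.seed(13)
a = np.random.normal(size=5)

np.random.seed(13)
b = np.random.normal(size=5)

print(np.allclose(a, b))  # True: the two draws are identical
```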

Second, I created parameters for the 1D and 2D examples. I figure this will make it easier for the reader to experiment with different-sized tensors. Here are examples of the changes:

Here are some code fragments for the 1D example:

# parameters for the run
data_size = 25
conv_size = 5
maxpool_size = 5
stride_size = 1
#--------Convolution--------
def conv_layer_1d(input_1d, my_filter, stride):
    # TensorFlow's 'conv2d()' function only works with 4D arrays:
    # [batch, height, width, channels]. We have 1 batch, height = 1,
    # width = the length of the input, and 1 channel, so we build the
    # 4D array by inserting size-1 dimensions.
    input_2d = tf.expand_dims(input_1d, 0)
    input_3d = tf.expand_dims(input_2d, 0)
    input_4d = tf.expand_dims(input_3d, 3)
    # Convolve along the width dimension; the stride parameter goes in
    # position 2 of strides=[1, 1, stride, 1].
    convolution_output = tf.nn.conv2d(input_4d, filter=my_filter,
                                      strides=[1, 1, stride, 1], padding="VALID")
    # Get rid of the extra size-1 dimensions
    conv_output_1d = tf.squeeze(convolution_output)
    return conv_output_1d

# Create filter for convolution.
my_filter = tf.Variable(tf.random_normal(shape=[1, conv_size, 1, 1]))
# Create convolution layer
my_convolution_output = conv_layer_1d(x_input_1d, my_filter, stride=stride_size)
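As a sanity check on these parameters, the length of a VALID-padded convolution output can be computed directly; this quick sketch (not part of the notebook) uses the same parameter names:

```python
# Output length of a 1D VALID convolution:
# floor((input_length - filter_length) / stride) + 1
data_size = 25
conv_size = 5
stride_size = 1

conv_out_len = (data_size - conv_size) // stride_size + 1
print(conv_out_len)  # 21, matching the printed output
```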

The output should automatically change to reflect the parameter settings:

# Convolution Output
print('Input = array of length %d' % (x_input_1d.shape.as_list()[0]))
print('Convolution w/ filter, length = %d, stride size = %d, results in an array of length %d:' % 
      (conv_size,stride_size,my_convolution_output.shape.as_list()[0]))
print(sess.run(my_convolution_output, feed_dict=feed_dict))

# Activation Output
print('\nInput = above array of length %d' % (my_convolution_output.shape.as_list()[0]))
print('ReLU element wise returns an array of length %d:' % (my_activation_output.shape.as_list()[0]))
print(sess.run(my_activation_output, feed_dict=feed_dict))
>>>> 1D Data <<<<
Input = array of length 25
Convolution w/ filter, length = 5, stride size = 1, results in an array of length 21:
[-2.63576341 -1.11550486 -0.95571411 -1.69670296 -0.35699379  0.62266493
  4.43316031  2.01364899  1.33044648 -2.30629659 -0.82916248 -2.63594174
  0.76669347 -2.46465087 -2.2855041   1.49780679  1.6960566   1.48557389
 -2.79799461  1.18149185  1.42146575]

Input = above array of length 21
ReLU element wise returns an array of length 21:
[ 0.          0.          0.          0.          0.          0.62266493
  4.43316031  2.01364899  1.33044648  0.          0.          0.
  0.76669347  0.          0.          1.49780679  1.6960566   1.48557389
  0.          1.18149185  1.42146575]
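The same lengths can be checked without TensorFlow. This is an independent NumPy sketch (not the notebook code) of a stride-1 VALID convolution followed by an element-wise ReLU; note that `np.convolve` flips the kernel (true convolution) where TF's `conv2d` does cross-correlation, so the values differ but the lengths agree:

```python
import numpy as np

data_size, conv_size = 25, 5
x = np.random.normal(size=data_size)
f = np.random.normal(size=conv_size)

# mode='valid' gives the same output length as TF's padding="VALID" at stride 1
conv_out = np.convolve(x, f, mode='valid')
relu_out = np.maximum(conv_out, 0.0)  # ReLU is element-wise, so the length is unchanged

print(len(conv_out), len(relu_out))  # 21 21
```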

Here are some code fragments for the 2D example:

# parameters for the run
row_size = 10
col_size = 10
conv_size = 2
conv_stride_size = 2
maxpool_size = 2
maxpool_stride_size = 1
# Convolution
def conv_layer_2d(input_2d, my_filter, stride_size):
    # TensorFlow's 'conv2d()' function only works with 4D arrays:
    # [batch, height, width, channels]. We have 1 batch and 1 channel,
    # but we do have height AND width this time, so we build the 4D
    # array by inserting size-1 dimensions.
    input_3d = tf.expand_dims(input_2d, 0)
    input_4d = tf.expand_dims(input_3d, 3)
    # Note the stride difference below: we now stride in both spatial dimensions.
    convolution_output = tf.nn.conv2d(input_4d, filter=my_filter,
                                      strides=[1, stride_size, stride_size, 1], padding="VALID")
    # Get rid of unnecessary size-1 dimensions
    conv_output_2d = tf.squeeze(convolution_output)
    return conv_output_2d

# Create Convolutional Filter
my_filter = tf.Variable(tf.random_normal(shape=[conv_size, conv_size, 1, 1]))
# Create Convolutional Layer
my_convolution_output = conv_layer_2d(x_input_2d, my_filter, stride_size=conv_stride_size)
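With these parameters the expected 2D output shape can be computed the same way as in the 1D case; a quick sketch (not part of the notebook) with the parameter names above:

```python
# VALID-padding output size, applied per spatial dimension:
# floor((input_size - filter_size) / stride) + 1
row_size, col_size = 10, 10
conv_size = 2
conv_stride_size = 2

out_rows = (row_size - conv_size) // conv_stride_size + 1
out_cols = (col_size - conv_size) // conv_stride_size + 1
print([out_rows, out_cols])  # [5, 5]
```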

Like the 1D example, output for the 2D example should automatically adjust.

# Convolution Output
print('Input = %s array' % (x_input_2d.shape.as_list()))
print('%s Convolution, stride size = [%d, %d] , results in the %s array' % 
      (my_filter.get_shape().as_list()[:2],conv_stride_size,conv_stride_size,my_convolution_output.shape.as_list()))
print(sess.run(my_convolution_output, feed_dict=feed_dict))

# Activation Output
print('\nInput = the above %s array' % (my_convolution_output.shape.as_list()))
print('ReLU element wise returns the %s array' % (my_activation_output.shape.as_list()))
print(sess.run(my_activation_output, feed_dict=feed_dict))
>>>> 2D Data <<<<
Input = [10, 10] array
[2, 2] Convolution, stride size = [2, 2] , results in the [5, 5] array
[[ 0.14431179  0.72783369  1.51149166 -1.28099763  1.78439188]
 [-2.54503059  0.76156765 -0.51650006  0.77131093  0.37542343]
 [ 0.49345911  0.01592223  0.38653135 -1.47997665  0.6952765 ]
 [-0.34617192 -2.53189754 -0.9525758  -1.4357065   0.66257358]
 [-1.98540258  0.34398788  2.53760481 -0.86784822 -0.3100495 ]]

Input = the above [5, 5] array
ReLU element wise returns the [5, 5] array
[[ 0.14431179  0.72783369  1.51149166  0.          1.78439188]
 [ 0.          0.76156765  0.          0.77131093  0.37542343]
 [ 0.49345911  0.01592223  0.38653135  0.          0.6952765 ]
 [ 0.          0.          0.          0.          0.66257358]
 [ 0.          0.34398788  2.53760481  0.          0.        ]]
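For completeness, the 2D shapes can also be verified without TensorFlow. This is a hand-rolled NumPy sketch of a strided VALID cross-correlation plus ReLU (an illustration only, not the notebook code):

```python
import numpy as np

row_size = col_size = 10
conv_size = 2
stride = 2

x = np.random.normal(size=(row_size, col_size))
f = np.random.normal(size=(conv_size, conv_size))

out_rows = (row_size - conv_size) // stride + 1
out_cols = (col_size - conv_size) // stride + 1
conv_out = np.empty((out_rows, out_cols))
for i in range(out_rows):
    for j in range(out_cols):
        # each output element is the sum of an element-wise product
        # between the filter and one strided patch of the input
        patch = x[i * stride:i * stride + conv_size,
                  j * stride:j * stride + conv_size]
        conv_out[i, j] = np.sum(patch * f)

relu_out = np.maximum(conv_out, 0.0)  # ReLU keeps the [5, 5] shape
print(conv_out.shape, relu_out.shape)  # (5, 5) (5, 5)
```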
nfmcclure commented 7 years ago

Wow! This is awesome. Thanks for these examples. I think they fit very well.