farizrahman4u / recurrentshop

Framework for building complex recurrent neural networks with Keras
MIT License

Can I have a CNN inside an RNN using recurrentshop ? #5

Closed: superhans closed this issue 7 years ago

superhans commented 8 years ago

(Taken from https://www.tensorflow.org/versions/r0.11/tutorials/recurrent/index.html)

# Placeholder for the inputs in a given iteration.
words = tf.placeholder(tf.int32, [batch_size, num_steps])

lstm = rnn_cell.BasicLSTMCell(lstm_size)
# Initial state of the LSTM memory.
initial_state = state = tf.zeros([batch_size, lstm.state_size])

for i in range(num_steps):
    # The value of state is updated after processing each batch of words.
    output, state = lstm(words[:, i], state)

    # The rest of the code.
    # ...

final_state = state

The reason I ask is that using this style, we can take the output of one time step of an LSTM and feed it into the input of the next time step, as in this code:

for step in range(num_iterations):
    with tf.device('/cpu:0'):
        patches = tf.image.extract_patches(images, tf.constant(patch_shape), inits+dx)
    patches = tf.reshape(patches, (batch_size * num_patches, patch_shape[0], patch_shape[1], num_channels))
    endpoints['patches'] = patches

    with tf.variable_scope('convnet', reuse=step>0):
        net = conv_model(patches)
        ims = net['concat']

    ims = tf.reshape(ims, (batch_size, -1))

    with tf.variable_scope('rnn', reuse=step>0) as scope:
        hidden_state = slim.ops.fc(tf.concat(1, [ims, hidden_state]), 512, activation=tf.tanh)
        prediction = slim.ops.fc(hidden_state, num_patches * 2, scope='pred', activation=None)
        endpoints['prediction'] = prediction

    prediction = tf.reshape(prediction, (batch_size, num_patches, 2))
    dx += prediction
    dxs.append(dx)

(taken from https://github.com/trigeorgis/mdm/blob/master/mdm_model.py). Notice that prediction, the output of the RNN at each timestep, is added to dx, and dx is used earlier in the loop in the line patches = tf.image.extract_patches(images, tf.constant(patch_shape), inits+dx).

farizrahman4u commented 8 years ago

Yes. You can. All you have to do is write the appropriate step function.
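
For reference, a custom step in recurrentshop can be written as an ordinary Keras functional graph that maps (input at time t, previous states) to (output, new states). Below is a minimal sketch along the lines of the RecurrentModel pattern in the README; shapes are placeholders, and the exact API may differ between recurrentshop versions.

from keras.layers import Input, Dense, Activation, add
from recurrentshop import RecurrentModel

x_t = Input(shape=(5,))      # input to the RNN at time t
h_tm1 = Input(shape=(10,))   # previous hidden state

# The step function is just this graph; RecurrentModel applies it over the time axis.
h_t = add([Dense(10)(x_t), Dense(10, use_bias=False)(h_tm1)])
h_t = Activation('tanh')(h_t)

rnn = RecurrentModel(input=x_t, output=h_t, initial_states=[h_tm1], final_states=[h_t])

A CNN over the per-timestep input would just be more layers in the same graph, applied to x_t before the Dense layers.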

superhans commented 8 years ago

I've been trying to solve this problem unsuccessfully. So I want to work on a toy version of the same problem. Let's not involve CNNs for the time being.

Assume the input data (X_tr) is of size (n_samples, timesteps, 1) and the labels (Y_tr) are np.argmax(X_tr, axis=1). In other words, for each sample, I want to find out where the maximum value of that sample is located.

To simplify things, each sample of the data X_tr looks like a Gaussian with a random mean and standard deviation. For the example shown in the figure (where timesteps=100), the labels Y_tr would be [46 82 2]. In fact, since the dynamic range of Y is high, we can divide it by timesteps=100, so Y_tr would be [0.46, 0.82, 0.02].

(figure: example input sequences, each a Gaussian bump over timesteps=100)
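
For concreteness, a toy dataset like this could be generated roughly as follows (the function name and the exact ranges used for the mean and standard deviation are just assumptions):

import numpy as np

def make_toy_data(n_samples=1000, timesteps=100):
    # Each sample is a Gaussian bump with a random mean and standard deviation.
    t = np.arange(timesteps)
    means = np.random.uniform(0, timesteps, size=(n_samples, 1))
    stds = np.random.uniform(2, 20, size=(n_samples, 1))
    X_tr = np.exp(-0.5 * ((t - means) / stds) ** 2)[..., np.newaxis]  # (n_samples, timesteps, 1)
    Y_tr = np.argmax(X_tr[..., 0], axis=1) / float(timesteps)         # scaled to [0, 1]
    return X_tr, Y_tr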

I want to locate the point of maximum using an RNN controller. The RNN should get no more than n_guess attempts to locate this maximum. So for a sample where the maximum is at 84, if n_guess=5, the controller should go 0 -> 50 -> 75 -> 87 -> 84.

The critical idea here is that the RNN only gets 5 guesses ("glimpses") and it does not see all the timesteps of the data. At each glimpse, it estimates which timestep to move to next.

In all the Keras-based RNN architectures, by contrast, the data has to go through all the timesteps (except the ones you have masked out, and there doesn't seem to be a way to change the mask during the computation). But I want to be able to skip timesteps. Is such a thing possible? I'm fairly certain it is possible in TensorFlow, and I'm working on that now, but if it worked in a Keras-based framework, that would be brilliant.
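
In TensorFlow, skipping timesteps basically reduces to indexing the sequence with the current integer guess at each step instead of scanning every timestep. Something like the sketch below (the names are made up, the guess would have to be rounded and clipped to a valid index, and some of these ops have different names in older TF versions):

import tensorflow as tf

def glimpse(sequence, guess):
    # sequence: (batch_size, timesteps, 1); guess: (batch_size,) int32 positions.
    batch_size = tf.shape(sequence)[0]
    indices = tf.stack([tf.range(batch_size), guess], axis=1)  # (batch_size, 2)
    return tf.gather_nd(sequence, indices)                     # one value per sample, (batch_size, 1)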

farizrahman4u commented 8 years ago

It is certainly possible. It would be great if you could provide a pseudo-algorithm for the step function (also mention the states and their shapes).

superhans commented 8 years ago

I am working on this. Will give it to you shortly.

superhans commented 7 years ago

The pseudocode is something like this:

def GuessMaxOfGaussian(X_tr, num_iterations=5, batch_size=10, timesteps=100):
    # X_tr is the data, of size (batch_size, timesteps, 1)

    initial_guess = timesteps // 2  # guess that the max is at 50, like binary search
    dx = np.zeros((batch_size, 1))

    # In the previous post, for one example, I said 50 -> 75 -> 87 -> 84,
    # so dx would be 0 (initial), 25, 37, 34 for that sample.

    # RNNs can be defined this way; see "Truncated Backpropagation" in
    # https://www.tensorflow.org/versions/r0.11/tutorials/recurrent/index.html

    for step in range(num_iterations):
        # take a glimpse at the current guess
        # (in real code this index would need to be rounded and gathered per sample)
        sample = X_tr[:, initial_guess + dx]

        with tf.variable_scope('rnn', reuse=step > 0) as scope:
            hidden_state = slim.fully_connected(sample, 1, activation_fn=tf.nn.tanh)
            prediction = slim.fully_connected(hidden_state, 1)
            # assume: prediction is a number between 0 and 1

        dx = dx + prediction * timesteps

    return initial_guess + dx

superhans commented 7 years ago

Ohhh. I understand a bit better now. In TensorFlow, it looks like RNNs are basically a way to enforce tied weights, so we can do things like this with the RNN:

for step in range(num_iterations):
    prediction = do_something(init + prediction)

So we could implement the same thing in Keras if we explicitly did:

prediction = do_something(init)
prediction1 = do_something(init+prediction)
prediction2 = do_something(init+prediction+prediction1)

and ensure that all the do_somethings (which could be any Sequential model) have tied weights. But I'm guessing this isn't possible.
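
Concretely, I mean something like this (Keras 2 functional API, with do_something as a single hypothetical layer instance standing in for a bigger model):

from keras.layers import Input, Dense, add
from keras.models import Model

do_something = Dense(1, activation='tanh')  # one instance, reused for every call

init = Input(shape=(1,))
prediction = do_something(init)
prediction1 = do_something(add([init, prediction]))
prediction2 = do_something(add([init, prediction, prediction1]))

model = Model(init, prediction2)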

farizrahman4u commented 7 years ago

It's very much possible, and pretty straightforward. You just have to write the appropriate RNNCell.
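
As a rough sketch of what such a cell could look like with RecurrentModel (layer sizes are placeholders, the glimpse extraction itself would still need its own layer inside the graph, and the exact API may differ between recurrentshop versions): the trick is to make the running offset dx one of the states, so the previous prediction is available at the next step.

from keras.layers import Input, Dense, Activation, add
from recurrentshop import RecurrentModel

glimpse_dim, hidden_dim = 1, 16     # placeholder sizes

x_t = Input(shape=(glimpse_dim,))   # the glimpse taken at the current guess
h_tm1 = Input(shape=(hidden_dim,))  # previous hidden state
dx_tm1 = Input(shape=(1,))          # running offset from the previous step

h_t = add([Dense(hidden_dim)(x_t),
           Dense(hidden_dim, use_bias=False)(h_tm1),
           Dense(hidden_dim)(dx_tm1)])
h_t = Activation('tanh')(h_t)

prediction = Dense(1)(h_t)        # how far to move next
dx_t = add([dx_tm1, prediction])  # dx += prediction, carried forward as a state

rnn = RecurrentModel(input=x_t, output=dx_t,
                     initial_states=[h_tm1, dx_tm1],
                     final_states=[h_t, dx_t])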