Yes. You can. All you have to do is write the appropriate step function.
I've been trying to solve this problem unsuccessfully, so I want to work on a toy version of the same problem. Let's not involve CNNs for the time being.
Assume the input data (`X_tr`) is of size `(n_samples, timesteps, 1)` and the input labels (`Y_tr`) are `np.argmax(X_tr, axis=1)`. In other words, for each sample, I want to find out where the maximum value of that sample is located.
To simplify things, each sample of the data `X_tr` looks like a Gaussian with a random mean and standard deviation. For the example shown in the figure (where `timesteps=100`), the labels `Y_tr` would be `[46 82 2]`. In fact, since the dynamic range of Y is high, we can divide it by `timesteps=100`, so `Y_tr` would be `[0.46, 0.82, 0.02]`.
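For concreteness, the toy data can be generated with something like this (my own sketch; the ranges for the random means and widths are arbitrary choices):

```python
import numpy as np

def make_toy_data(n_samples=10, timesteps=100):
    # each sample is a Gaussian bump with a random mean and standard deviation
    t = np.arange(timesteps)
    means = np.random.uniform(0, timesteps, size=n_samples)
    stds = np.random.uniform(3, 15, size=n_samples)
    X_tr = np.exp(-(t[None, :] - means[:, None]) ** 2 / (2 * stds[:, None] ** 2))
    X_tr = X_tr[:, :, None]                  # shape (n_samples, timesteps, 1)
    Y_tr = np.argmax(X_tr[:, :, 0], axis=1) / float(timesteps)  # scaled to [0, 1)
    return X_tr, Y_tr
```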
I want to locate the point of maximum using an RNN controller. The RNN should not get more than `n_guess` attempts to locate this maximum. So in the case of sample 1, where the maximum is at 84, if `n_guess=5`, the controller should go like: 0 -> 50 -> 75 -> 87 -> 84.
The critical idea here is that the RNN only gets 5 guesses ("glimpses") and it does not see all the timesteps of the data. At each glimpse, it estimates which timestep to move to next.
In contrast, in all the Keras-based RNN architectures the data has to go through all the timesteps (except for the ones you have masked out, and there doesn't seem to be a way to change the mask during the computation). But I want to be able to skip timesteps. Is such a thing possible? I'm fairly certain it is possible in TensorFlow and I'm working on that now. But if it worked in a Keras-based framework, that would be brilliant.
It is certainly possible. It would be great if you could provide a pseudo-algorithm for the step function. (Also mention the states and their shapes.)
I am working on this. Will give it to you shortly.
The pseudocode is something like:

```python
def GuessMaxOfGaussian(X_tr, num_iterations=5, batch_size=10, timesteps=100):
    # X_tr is the data, of shape (batch_size, timesteps, 1)
    initial_guess = timesteps // 2  # guess that the max is at 50, like binary search
    dx = tf.zeros((batch_size, 1))
    # In the previous post, for one example, I said 50 -> 75 -> 87 -> 84,
    # so dx would be 0 (initial), 25, 37, 34 for that sample.
    # RNNs can be defined this way; see "Truncated Backpropagation" in
    # https://www.tensorflow.org/versions/r0.11/tutorials/recurrent/index.html
    for step in range(num_iterations):
        sample = X_tr[:, initial_guess + dx]  # make a guess (take a glimpse)
        # reuse=step > 0 ties the weights across all iterations
        with tf.variable_scope('rnn', reuse=step > 0) as scope:
            hidden_state = slim.fully_connected(sample, 1, activation_fn=tf.nn.tanh)
            prediction = slim.fully_connected(hidden_state, 1)
        # assume: prediction is a number between 0 and 1
        dx = dx + prediction * timesteps
    return initial_guess + dx
```
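One detail the pseudocode glosses over: indexing `X_tr[:, initial_guess + dx]` with a tensor isn't valid TensorFlow, so the glimpse needs an explicit gather. Here is a minimal sketch of one workaround (my own; `take_glimpse` is a made-up helper name, the one-hot trick is just one option, and it's written against TF 1.x-style APIs):

```python
import tensorflow as tf

def take_glimpse(X, guess, timesteps):
    """Pick one value per sample at the rounded, clipped guess position.

    X:     (batch_size, timesteps, 1) float tensor
    guess: (batch_size, 1) float tensor of positions in [0, timesteps)
    """
    idx = tf.cast(tf.clip_by_value(tf.round(guess[:, 0]),
                                   0, timesteps - 1), tf.int32)
    mask = tf.one_hot(idx, timesteps)             # (batch_size, timesteps)
    glimpse = tf.reduce_sum(X[:, :, 0] * mask, axis=1)
    return tf.expand_dims(glimpse, 1)             # (batch_size, 1)
```

Note that the guessed index itself is not differentiable, which is why glimpse models like this are usually trained with reinforcement-learning tricks (as in the recurrent attention literature) or by supervising the predicted position directly.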
Ohhh, I understand a bit better now. In TensorFlow, it looks like RNNs are a way to enforce tied weights: we can do things like this with the RNN:

```python
for step in range(num_iterations):
    prediction = do_something(init + prediction)
```
So we could implement the same thing in Keras if we explicitly did:

```python
prediction  = do_something(init)
prediction1 = do_something(init + prediction)
prediction2 = do_something(init + prediction + prediction1)
```

and ensure that all the `do_something`s (which could be any Sequential model) have tied weights. But I'm guessing this isn't possible.
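To make concrete what I mean, here is a sketch in the Keras 2 functional style, reusing a single layer object for every `do_something` (the `Dense(1)` size is a placeholder; as far as I can tell, reusing one layer instance like this does share its weights):

```python
from keras.layers import Input, Dense, add
from keras.models import Model

inp = Input(shape=(1,))
do_something = Dense(1, activation='tanh')  # one layer object = one set of weights

prediction  = do_something(inp)
prediction1 = do_something(add([inp, prediction]))
prediction2 = do_something(add([inp, prediction, prediction1]))

model = Model(inputs=inp, outputs=prediction2)
```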
It's very much possible, and pretty straightforward. You just have to write the appropriate RNNCell.
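The snippet I'm referring to is essentially this truncated-backpropagation loop (paraphrased, so details may differ from the page):

```python
# Placeholder for the inputs in a given iteration.
words = tf.placeholder(tf.int32, [batch_size, num_steps])

lstm = tf.nn.rnn_cell.BasicLSTMCell(lstm_size)
# Initial state of the LSTM memory.
state = tf.zeros([batch_size, lstm.state_size])

for i in range(num_steps):
    # The same lstm object (tied weights) processes one timestep per
    # iteration, and its updated state is fed back in on the next one.
    output, state = lstm(words[:, i], state)
```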
(Adapted from https://www.tensorflow.org/versions/r0.11/tutorials/recurrent/index.html)
The reason I ask is that, using this style, we can take the output of one timestep of an LSTM and feed it into the input of the next timestep, like in the code in https://github.com/trigeorgis/mdm/blob/master/mdm_model.py. Notice in that code that `prediction`, which is the output of the RNN at each timestep, is added to `dx`, and `dx` is used earlier in the line `patches = tf.image.extract_patches(images, tf.constant(patch_shape), inits + dx)`.
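For completeness, here is a rough sketch of what such an RNNCell could look like for the toy problem, written against the old `tf.nn.rnn_cell` interface and `slim` (the name `GlimpseCell`, the one-hot gather, and the layer sizes are all my own illustration, not code from the thread):

```python
import tensorflow as tf
import tensorflow.contrib.slim as slim

class GlimpseCell(tf.nn.rnn_cell.RNNCell):
    """Cell whose state is the running offset dx; each step glimpses one timestep."""

    def __init__(self, data, timesteps):
        self._data = data            # (batch_size, timesteps, 1), closed over
        self._timesteps = timesteps

    @property
    def state_size(self):
        return 1                     # dx, one scalar per sample

    @property
    def output_size(self):
        return 1                     # the predicted jump

    def __call__(self, inputs, state, scope=None):
        with tf.variable_scope(scope or 'glimpse_cell'):
            guess = inputs + state   # inputs carries initial_guess, state carries dx
            idx = tf.cast(tf.clip_by_value(tf.round(guess[:, 0]),
                                           0, self._timesteps - 1), tf.int32)
            mask = tf.one_hot(idx, self._timesteps)      # one-hot gather, as above
            glimpse = tf.expand_dims(
                tf.reduce_sum(self._data[:, :, 0] * mask, axis=1), 1)
            hidden_state = slim.fully_connected(glimpse, 1, activation_fn=tf.nn.tanh)
            prediction = slim.fully_connected(hidden_state, 1)
            new_state = state + prediction * self._timesteps
        return prediction, new_state
```

Driving it with `tf.nn.dynamic_rnn` over a dummy input of shape `(batch_size, n_guess, 1)` filled with the initial guess would then give an RNN whose "timesteps" are the five glimpses rather than the hundred data points.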