jtkim-kaist / ram_modified

"Recurrent Models of Visual Attention" in TensorFlow
42 stars 9 forks

It seems your code still has an issue (I don't quite understand your solution) #1

Open machanic opened 7 years ago

machanic commented 7 years ago

This is the loss function:

`J = tf.concat(values=[tf.log(p_y + SMALL_NUM) * onehot_labels_placeholder, tf.log(p_loc + SMALL_NUM) * (R - no_grad_b)], axis=1)  # axis = 1 actually concats columns`

and `p_loc` is made by: `p_loc = gaussian_pdf(mean_locs, sampled_locs)`. Here `mean_locs` is not passed through `stop_gradient`, but `sampled_locs` is. Your code seems to `stop_gradient` only `sampled_locs` and not `mean_locs`? Also, I cannot see in your code the separation into the following two parts:

1. location network, baseline network : learn with gradients of reinforcement learning only.

2. glimpse network, core network : learn with gradients of supervised learning only.
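The hybrid objective being discussed can be sketched numerically. This is a plain-NumPy sketch of the two loss terms, not the repo's TensorFlow code; the probabilities, reward, baseline, and `sigma` values are made up for illustration:

```python
import numpy as np

SMALL_NUM = 1e-10

def gaussian_log_pdf(mean, sample, sigma=0.22):
    # log N(sample; mean, sigma^2); `sample` is treated as a constant here,
    # mirroring stop_gradient on sampled_locs in the TensorFlow graph
    return -0.5 * np.log(2 * np.pi * sigma**2) - (sample - mean)**2 / (2 * sigma**2)

# Supervised term: log p(y | x) for the true class, via a one-hot pick
p_y = np.array([0.1, 0.7, 0.2])          # illustrative softmax output of the core net
onehot = np.array([0.0, 1.0, 0.0])
supervised = np.sum(np.log(p_y + SMALL_NUM) * onehot)

# REINFORCE term: log-prob of the sampled location, weighted by (R - b);
# (R - b) carries no gradient, matching no_grad_b in the question
mean_loc, sampled_loc = 0.3, 0.35        # emitted mean and the drawn sample
R, baseline = 1.0, 0.6
reinforce = gaussian_log_pdf(mean_loc, sampled_loc) * (R - baseline)

J = supervised + reinforce               # objective to maximize (loss is -J)
```

The supervised term drives the glimpse and core networks; the REINFORCE term drives the location and baseline networks.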

jtkim-kaist commented 7 years ago

Sorry for the late answer; I have been very busy these days.

The separation means 'not sharing gradients'. In this code, you can see that the gradients of the REINFORCE part flow only through the location and baseline networks.

  1. "Your code seems to stop_gradient only sampled_locs and not mean_locs?": No, this is already how the original RAM implementation does it, and I think it is correct.

  2. "The separation into two parts is not visible in your code?": Please look at my code carefully. You can see that the gradient flow is separated between the (location, baseline) networks and the (glimpse, core) networks.
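Some context for answer 1: in the REINFORCE estimator, the drawn sample is treated as a constant and only the emitted mean receives a gradient, through the Gaussian log-density. For a Gaussian, that gradient is (x - mu) / sigma^2 scaled by the advantage. A small NumPy check of this (a sketch; the numbers and `sigma` are illustrative, not taken from the repo):

```python
import numpy as np

def log_pdf(mean, sample, sigma=0.22):
    # log N(sample; mean, sigma^2)
    return -0.5 * np.log(2 * np.pi * sigma**2) - (sample - mean)**2 / (2 * sigma**2)

mean, sample, sigma = 0.3, 0.35, 0.22
advantage = 0.4  # R - b, also held constant (the no_grad_b in the question)

# Analytic REINFORCE gradient w.r.t. the mean: (x - mu) / sigma^2 * (R - b)
analytic = (sample - mean) / sigma**2 * advantage

# Finite difference over `mean` only: `sample` stays fixed, which is exactly
# what stop_gradient(sampled_locs) enforces in the graph
eps = 1e-6
numeric = (log_pdf(mean + eps, sample) - log_pdf(mean - eps, sample)) / (2 * eps) * advantage

assert abs(analytic - numeric) < 1e-5
```

Stopping the gradient on `sampled_locs` is what makes the score-function estimator valid: differentiating through the sample itself would give a different (and incorrect) gradient for this objective.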

I'll give you a more detailed answer as soon as possible...
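The "separated gradient flow" described above is commonly realized by updating each variable group only with its own loss term's gradient (in TF1 this can be done with `optimizer.compute_gradients(loss, var_list=...)`). A toy plain-Python sketch of that routing, with made-up parameter and gradient values:

```python
# Hypothetical parameter groups mirroring the four sub-networks
params = {
    "glimpse": 0.5, "core": 0.5,        # updated by the supervised term only
    "location": 0.5, "baseline": 0.5,   # updated by the REINFORCE term only
}

# Illustrative per-term gradients (values are made up)
supervised_grads = {"glimpse": 0.1, "core": -0.2}
reinforce_grads = {"location": 0.05, "baseline": -0.1}

lr = 0.1
for grads in (supervised_grads, reinforce_grads):
    for name, g in grads.items():
        params[name] -= lr * g   # each loss term touches only its own group
```

Because each gradient dict only names its own group's variables, the supervised loss never moves the location or baseline parameters, and the REINFORCE loss never moves the glimpse or core parameters.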