awjuliani / DeepRL-Agents

A set of Deep Reinforcement Learning Agents implemented in Tensorflow.

Why can apply_gradients be given externally computed gradients? #1

Closed Marsan-Ma-zz closed 8 years ago

Marsan-Ma-zz commented 8 years ago

I want to ask about this part of the policy network example:

```python
loss = -tf.reduce_mean(tf.log(input_y - probability) * advantages)
newGrads = tf.gradients(loss, tvars)

adam = tf.train.AdamOptimizer(learning_rate=learning_rate)  # Our optimizer
# Placeholders to send the final gradients through when we update.
W1Grad = tf.placeholder(tf.float32, name="batch_grad1")
W2Grad = tf.placeholder(tf.float32, name="batch_grad2")
batchGrad = [W1Grad, W2Grad]
updateGrads = adam.apply_gradients(zip(batchGrad, tvars))
```

Here newGrads are taken out of the graph, post-processed, and then fed back into the graph through W1Grad and W2Grad.

I am wondering: how does TensorFlow know that the gradients for the variables will be supplied through batchGrad?

I mean, if we just call apply_gradients on arbitrary placeholders that are not computed from any trainable tf.Variable, it should raise the error "No gradients provided for any variable".

Your code works, but I am still trying to figure out how.
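
For example, a minimal sketch of the situation I mean (hypothetical variable `w`, TF1 API): a loss fed purely from outside the graph has no path to any trainable variable, so the optimizer refuses to build an update op.

```python
import tensorflow as tf

w = tf.get_variable("w", [2, 2])           # a trainable tf.Variable in the graph
ext_loss = tf.placeholder(tf.float32, [])  # a "loss" that only exists outside the graph

adam = tf.train.AdamOptimizer(0.01)
try:
    # ext_loss has no path to w, so every gradient comes back as None
    # and TensorFlow refuses to build the update op.
    adam.minimize(ext_loss)
except ValueError as e:
    print(e)  # "No gradients provided for any variable, ..."
```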

awjuliani commented 8 years ago

Hi Marsan,

I agree that this isn't necessarily the best way to handle this. What it is doing is allowing us to collect the gradients outside of the graph over time and then apply them to the correct variables. Because the exact gradient values have already been computed, and there are only two trainable variables, using zip(batchGrad, tvars) lets TensorFlow know which variables should receive which gradients. Actually applying them is then just a matter of scaling them by the learning rate and adding them to the weights, so no extra computation needs to be done using the graph logic. I hope that makes sense.
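
For illustration, here is a self-contained toy sketch of the same pattern (the shapes, the dummy quadratic loss, and the fake "episodes" are made up for the example and are not the notebook's actual network):

```python
import numpy as np
import tensorflow as tf

# Two trainable variables, standing in for the policy network's W1 and W2.
W1 = tf.Variable(tf.ones([3, 2]), name="W1")
W2 = tf.Variable(tf.ones([2, 1]), name="W2")
tvars = [W1, W2]

x = tf.placeholder(tf.float32, [None, 3])
y = tf.matmul(tf.nn.relu(tf.matmul(x, W1)), W2)
loss = tf.reduce_mean(tf.square(y))          # dummy loss, just to have gradients

# Symbolic gradients; sess.run(newGrads, ...) takes their values out of the graph.
newGrads = tf.gradients(loss, tvars)

# Placeholders that carry the (accumulated) gradient values back into the graph.
W1Grad = tf.placeholder(tf.float32, name="batch_grad1")
W2Grad = tf.placeholder(tf.float32, name="batch_grad2")
batchGrad = [W1Grad, W2Grad]

adam = tf.train.AdamOptimizer(learning_rate=0.01)
# zip pairs each placeholder with a specific variable, so apply_gradients is
# told explicitly which variable each incoming gradient belongs to.
updateGrads = adam.apply_gradients(zip(batchGrad, tvars))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Accumulate gradients over a few "episodes" outside the graph...
    gradBuffer = [np.zeros([3, 2]), np.zeros([2, 1])]
    for _ in range(5):
        grads = sess.run(newGrads, feed_dict={x: np.random.randn(4, 3)})
        for i, g in enumerate(grads):
            gradBuffer[i] += g

    # ...then feed the summed gradients back in and apply them in one step.
    sess.run(updateGrads, feed_dict={W1Grad: gradBuffer[0],
                                     W2Grad: gradBuffer[1]})
```

Because zip(batchGrad, tvars) hands apply_gradients explicit (gradient, variable) pairs, TensorFlow never has to trace where the gradient values came from; it just uses them to update the paired variables.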

One could alternatively collect the experience traces over multiple episodes outside the graph, and do the gradient computation all at once within the graph. I may end up rewriting this algorithm, as it was one of the earliest ones, and aspects of it such as this are less than ideal.
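
A rough sketch of that alternative (the network sizes and the random stand-in data are made up, and the loss below is a standard Bernoulli log-likelihood rather than the notebook's exact expression):

```python
import numpy as np
import tensorflow as tf

# Hypothetical tiny policy network; names loosely follow the notebook.
observations = tf.placeholder(tf.float32, [None, 4])
input_y = tf.placeholder(tf.float32, [None, 1])      # actions that were taken
advantages = tf.placeholder(tf.float32, [None, 1])   # discounted returns

W1 = tf.get_variable("W1", [4, 8])
W2 = tf.get_variable("W2", [8, 1])
hidden = tf.nn.relu(tf.matmul(observations, W1))
probability = tf.nn.sigmoid(tf.matmul(hidden, W2))

# Log-likelihood of the chosen action, weighted by the advantage.
loglik = tf.log(input_y * probability + (1 - input_y) * (1 - probability))
loss = -tf.reduce_mean(loglik * advantages)

# Gradient computation and application happen in a single in-graph step;
# no gradients ever leave the graph.
train_op = tf.train.AdamOptimizer(1e-2).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # xs, ys, advs stand in for the traces stacked from several episodes.
    xs = np.random.randn(64, 4)
    ys = np.random.randint(0, 2, (64, 1)).astype(np.float32)
    advs = np.random.randn(64, 1)
    sess.run(train_op, feed_dict={observations: xs, input_y: ys, advantages: advs})
```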

Marsan-Ma-zz commented 8 years ago

Hi awjuliani,

Thanks for your explanation! I understand that you collect the gradients and apply them later. I want to do the same, but I can't figure out how to get past the check that apply_gradients must be given some trainable tf.Variable to update.

Here I want to do the same as you do, but I need a workaround like the following to make TensorFlow "think" that apply_gradients is connected to some tf.Variable in the graph.

```python
from tensorflow.python.ops import control_flow_ops

# If the boolean flag en_ext_losses is True, use the externally injected loss;
# otherwise fall back to the loss computed from the graph.
self.losses = control_flow_ops.cond(
    self.en_ext_losses,
    lambda: self.ext_losses,
    lambda: self.losses,
)
```

Here the original self.losses is computed from the graph, so TensorFlow passes the check, while self.ext_losses is the out-of-graph loss that gets injected once the reinforcement learning episodes have finished.

That's an ugly workaround, which is why I am trying to figure out how yours works.

awjuliani commented 8 years ago

I am actually unsure of how to compute a loss from outside the graph in your case. My suggestion would be to find a way to tie it into the graph, if possible. The problem is that TensorFlow has no way of dealing with such a loss: there is no way to assign credit for it to any variables in the graph, because the loss didn't come from the graph itself or pass through a differentiable function in the graph.
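
For example, if the external quantity can enter as a per-sample weight on an in-graph loss rather than as the loss itself, the gradient path stays inside the graph. A minimal sketch (all names hypothetical):

```python
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 2])
w = tf.get_variable("w", [2, 1])
prediction = tf.matmul(x, w)

# The externally computed, non-differentiable signal (e.g. a return from the
# environment) enters only as a placeholder that scales an in-graph loss, so
# the gradient path still runs through w.
ext_signal = tf.placeholder(tf.float32, [None, 1])
loss = tf.reduce_mean(ext_signal * tf.square(prediction))
train_op = tf.train.AdamOptimizer(1e-2).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op, feed_dict={x: np.random.randn(8, 2),
                                  ext_signal: np.random.randn(8, 1)})
```

This is essentially what the advantages placeholder does in the policy network example.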

I hope you can figure out a solution, and wish you luck.