sai-karthikeya-vemuri opened this issue 3 years ago
The function fwd_gradients_0 is used to calculate the gradients involved in the nonlinear operator N in Eq. (7); see the paper. It is called inside the function net_U0, which outputs the first term involved in SSE_n (Eq. (12)). The dummy variables are optional and are just used to hold the computed gradients; see the documentation.
Additionally, note that the function fwd_gradients_1 is unused, as there is no gradient term involved in the function net_U1, which outputs the terms involved in SSE_b (Eq. (12)).
Is it not sufficient to directly take gradients of U0 w.r.t. x?
I didn't get what you mean by taking the gradient of U0; it is used in SSE_n without any further manipulation required, see the code.
Thank you for the reply. What I meant was: instead of using a helper function and a dummy variable to calculate the gradients, is it not sufficient to use "tf.gradients(U0, x)" directly?
The function fwd_gradients_0 performs automatic differentiation in forward mode, where the Jacobian-vector products (JVPs) are calculated by composing two reverse-mode vector-Jacobian products (VJPs), hence the double use of tf.gradients. See the following link for more details:
https://github.com/renmengye/tensorflow-forward-ad/issues/2
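For concreteness, here is a minimal sketch of that trick in TF1-style code. The function name and the dummy cotangent are illustrative, not copied from the repository, and the sketch assumes x has a single column per sample (as in this problem), so the ones-seeded second call returns the full dU/dx.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

def fwd_gradients(U, x, dummy):
    """Forward-mode derivative dU/dx built from two reverse-mode tf.gradients calls.

    tf.gradients only implements reverse mode (VJPs), so the JVP is obtained by
    composing two of them:
      1) g = J^T u for a dummy cotangent u with the same shape as U,
      2) differentiating g w.r.t. u (seeded with ones by default) contracts J
         with a vector of ones over the input dimension; when x has one column,
         this is exactly dU/dx, with the shape of U.
    """
    g = tf.gradients(U, x, grad_ys=dummy)[0]  # g = J^T u, shape of x; linear in u
    return tf.gradients(g, dummy)[0]          # d(g)/d(u) seeded with ones = dU/dx, shape of U
```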
Okay, thank you, I get the approach.
But I still don't understand why we have to perform forward gradients instead of standard reverse gradients.
My problem is that I am solving the same problem with a custom autodiff framework that only supports reverse gradients, and when I call it twice using a dummy variable I get zero.
We use forward gradients because (quoting Wikipedia):
Forward accumulation is more efficient than reverse accumulation for functions f : ℝⁿ → ℝᵐ with m ≫ n, as only n sweeps are necessary, compared to m sweeps for reverse accumulation.
In principle, reverse-mode gradients should be able to produce the desired derivatives. Take care of the arguments that are passed to your custom autodiff function, which differ between the forward and reverse modes. A reverse-mode gradient only requires one sweep of the computational graph but might be less efficient in our case.
Yes, in principle there shouldn't be any difference between forward and reverse gradients. But:
In the case of reverse gradients, i.e. tf.gradients(U0, x), the output will have the shape of x, i.e. N×1.
In the case of forward gradients, as implemented in the code, the output will have the shape of the dummy variable (i.e. the shape of U0), i.e. N×q.
Hence, further construction of the SSE will yield different results for forward and reverse gradients.
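To make the shape difference concrete, here is a toy sketch; the one-layer network and the sizes N and q are made up purely for illustration.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

N, q = 50, 4                                  # N collocation points, q stages (made-up sizes)
x = tf.placeholder(tf.float32, shape=(N, 1))
W = tf.Variable(tf.random_normal((1, q)))
U0 = tf.tanh(tf.matmul(x, W))                 # stand-in for the network output, shape (N, q)

# Reverse mode: a single tf.gradients call sums over the q output columns,
# so the result has the shape of x.
U0_x_rev = tf.gradients(U0, x)[0]             # shape (N, 1) = d(sum_j U0[:, j]) / dx

# Forward mode via two reverse-mode calls: the result has the shape of the
# dummy variable, i.e. the shape of U0, with one derivative per column.
dummy = tf.placeholder(tf.float32, shape=(N, q))
g = tf.gradients(U0, x, grad_ys=dummy)[0]     # J^T u, shape (N, 1)
U0_x_fwd = tf.gradients(g, dummy)[0]          # dU0[:, j]/dx for each j, shape (N, q)
```

Per column, U0_x_fwd agrees with tf.gradients(U0[:, j:j+1], x)[0]; the single reverse-mode call collapses the q columns into one summed derivative.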
Edit: Can you please elaborate on what you mean by the following?
"Take care of the arguments that are passed to your custom autodiff function, which differ between the forward and reverse modes. A reverse-mode gradient only requires one sweep of the computational graph but might be less efficient in our case."
Is it not sufficient to directly take gradients of U0 w.r.t. x? I think I am missing something here; much obliged if someone could clear it up.
Thanks