I've been working with odeint, where the inputs are a neural network (the ODE function) and a vector of time points t. The output is a tensor with an extra leading dimension of size len(t) in front of the dimensions of the network's output, i.e. one solution state per time point.
My goal is to use only the output at the final time step and compute the loss against the true value. In this setting, do the outputs at the earlier time steps play any role? Specifically, how does the gradient propagate through these intermediate outputs?
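To make the question concrete, here is a minimal fixed-step Euler sketch (an assumption for illustration, not the actual adaptive solver in odeint) of the setup I mean: the solver returns one state per time point, the loss touches only the last one, and the gradient with respect to a parameter still flows through every intermediate state because each state depends on the previous one. For the scalar ODE dy/dt = a*y, the hand-accumulated chain-rule gradient matches the closed form for n Euler steps:

```python
def odeint_euler(f, y0, ts):
    """Toy fixed-step Euler integrator (illustrative stand-in for odeint).

    Returns one state per time point in ts, mirroring odeint's extra
    leading time dimension on the output.
    """
    ys = [y0]
    for t0, t1 in zip(ts, ts[1:]):
        dt = t1 - t0
        ys.append(ys[-1] + dt * f(t0, ys[-1]))
    return ys


a = 0.5                       # the "parameter" we differentiate w.r.t.
f = lambda t, y: a * y        # scalar ODE dy/dt = a*y
ts = [i * 0.1 for i in range(11)]
ys = odeint_euler(f, 1.0, ts)
y_final = ys[-1]              # the loss would use only this last slice

# Gradient of y_final w.r.t. a, accumulated step by step. Even though
# the loss only touches y_final, the chain rule runs through every
# intermediate state, since y_{k+1} = y_k * (1 + a*dt) implies
#   dy_{k+1}/da = dy_k/da * (1 + a*dt) + y_k * dt
grad = 0.0
y = 1.0
for t0, t1 in zip(ts, ts[1:]):
    dt = t1 - t0
    grad = grad * (1 + a * dt) + y * dt
    y = y * (1 + a * dt)

# Closed form for n uniform Euler steps starting from y0 = 1:
#   dy_n/da = n * dt * (1 + a*dt)**(n-1)
n = len(ts) - 1
closed = n * 0.1 * (1 + a * 0.1) ** (n - 1)
assert abs(grad - closed) < 1e-9
```

So the earlier outputs matter for the gradient even when the loss ignores them: backpropagation passes through each intermediate state on its way from y_final back to the parameters. (With torchdiffeq's adjoint mode the mechanism differs, but the same dependency holds.)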
Thank you for your help!