Open kayween opened 1 year ago
We've been planning a feature to let users control the "vectorization" factor of the jacobian computation (https://github.com/pytorch/functorch/issues/680). At one extreme, one can compute the jacobian row-by-row. At the other extreme we can use vmap to turn the for-loop into a vectorized computation for more performance (at the cost of using more peak memory).
So there is a performance <-> memory tradeoff here. Today functorch.jacrev goes to the vectorized extreme and torch.autograd.functional.jacobian is at the for-loop extreme. I'm curious -- does using torch.autograd.functional.jacobian instead resolve the high memory usage?
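The two extremes can be sketched directly with `vjp` and `vmap` (a toy illustration, not the code from this issue; this uses the `torch.func` namespace from recent PyTorch, which absorbed functorch's API):

```python
import torch
from torch.func import vjp, vmap

# Toy function standing in for a residual: R^4 -> R^12.
def f(x):
    return torch.sin(x).repeat(3)

x = torch.randn(4, dtype=torch.float64)
y, vjp_fn = vjp(f, x)
basis = torch.eye(y.numel(), dtype=torch.float64)

# For-loop extreme: one VJP per output row (low peak memory, slow).
jac_loop = torch.stack([vjp_fn(v)[0] for v in basis])

# Vectorized extreme: vmap over all rows at once (fast, high peak memory).
jac_vmap = vmap(vjp_fn)(basis)[0]

assert torch.allclose(jac_loop, jac_vmap)
```

A chunked middle ground would slice `basis` into batches and `vmap` over one batch at a time, which is the "vectorization factor" idea described above.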
> The output of ResidualFunctional.residual is a tensor of size (10000,) and inputs is a tensor of size (1001,). Thus, the Jacobian is 10000 by 1001, which takes about 74 MB in double precision.

If the output size is much greater than the input size, then it's likely that functorch.jacfwd will be more efficient. Have you tried running that instead of jacrev?
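A small sanity check of that suggestion, on a made-up function with the same tall-Jacobian shape scaled down (using the `torch.func` namespace from recent PyTorch; forward mode needs one JVP per input, reverse mode one VJP per output):

```python
import torch
from torch.func import jacfwd, jacrev

torch.manual_seed(0)
# Many more outputs than inputs, like the (10000, 1001) case in this issue.
W = torch.randn(100, 11, dtype=torch.float64)

def residual(x):
    return torch.tanh(W @ x)  # R^11 -> R^100

x = torch.randn(11, dtype=torch.float64)

J_fwd = jacfwd(residual)(x)  # forward mode: 11 JVP passes
J_rev = jacrev(residual)(x)  # reverse mode: 100 VJP passes (vmapped)

assert J_fwd.shape == (100, 11)
assert torch.allclose(J_fwd, J_rev)
```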
Yes, I have tried functorch.jacfwd as well, but unfortunately it does not solve the memory issue :(

I understand that forward-mode autodiff is faster than reverse-mode if the input size is smaller than the output size. But is forward mode also more memory efficient?
> The output of ResidualFunctional.residual is a tensor of size (10000,) and inputs is a tensor of size (1001,). Thus, the Jacobian is 10000 by 1001, which takes about 74 MB in double precision.
>
> If the output size is much greater than the input size, then it's likely that functorch.jacfwd will be more efficient. Have you tried running that instead of jacrev?
Yeah, torch.autograd.functional.jacobian does work, but it is too slow. In fact, that is exactly why I was trying to get functorch working. I was hoping that functorch could compute the Jacobian faster than torch.autograd.
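One middle ground worth noting as a suggestion (it is not mentioned in the thread): torch.autograd.functional.jacobian accepts an experimental `vectorize=True` flag that batches the backward passes instead of looping, recovering some speed without switching APIs. A toy sketch with a made-up function:

```python
import torch
from torch.autograd.functional import jacobian

# Hypothetical stand-in for the residual function: R^7 -> R^35.
def f(x):
    return torch.sin(x).repeat(5)

x = torch.randn(7, dtype=torch.float64)

# Default: one backward pass per output element (slow, low memory).
J_loop = jacobian(f, x)

# Experimental: batch the backward passes (faster, more peak memory).
J_vec = jacobian(f, x, vectorize=True)

assert torch.allclose(J_loop, J_vec)
```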
> We've been planning a feature to let users control the "vectorization" factor of the jacobian computation (#680). At one extreme, one can compute the jacobian row-by-row. At the other extreme we can use vmap to turn the for-loop into a vectorized computation for more performance (at the cost of using more peak memory).
>
> So there is a performance <-> memory tradeoff here. Today functorch.jacrev goes to the vectorized extreme and torch.autograd.functional.jacobian is at the for-loop extreme. I'm curious -- does using torch.autograd.functional.jacobian instead resolve the high memory usage?
I have a general question about automatic differentiation.

I have a code base that computes the Jacobian of the above function manually (derive the math expression of the Jacobian and type it into the code), and this "manual differentiation" does not have memory issues on a 24 GB GPU.

Theoretically, does automatic differentiation have to cost more memory than manual differentiation when computing Jacobians of vector functions? It looks like automatic differentiation needs to store all intermediate matrices and therefore might consume more memory?
> Theoretically, does automatic differentiation have to cost more memory than manual differentiation when computing Jacobians of vector functions? It looks like automatic differentiation needs to store all intermediate matrices and therefore might consume more memory?
It depends on what exactly "manual differentiation" is. But yes, reverse-mode AD needs to store intermediates, and this will increase memory usage.
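To make that concrete, here is a toy comparison on a hypothetical function (not the one from this issue): for f(x) = tanh(Wx), the hand-derived Jacobian diag(1 - tanh(Wx)^2) W needs only the forward activations, while reverse-mode AD must also keep the recorded intermediates alive until the backward passes finish:

```python
import torch
from torch.func import jacrev

torch.manual_seed(0)
W = torch.randn(50, 10, dtype=torch.float64)

def f(x):
    return torch.tanh(W @ x)

x = torch.randn(10, dtype=torch.float64)

# Manual differentiation: closed-form J = diag(1 - tanh(Wx)^2) @ W.
# Only the (50,) activation and the result need to be materialized.
y = torch.tanh(W @ x)
J_manual = (1.0 - y**2).unsqueeze(1) * W

# Reverse-mode AD records the forward intermediates before the backward
# passes run, so its peak memory is at least as large as the manual path.
J_ad = jacrev(f)(x)

assert torch.allclose(J_manual, J_ad)
```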
Hi,

I implemented a Jacobian computation using functorch, but encountered a memory overflow issue.

The function that I want to differentiate is ResidualFunctional.residual. I'd like to compute the Jacobian of this function w.r.t. its first argument, inputs. The output of ResidualFunctional.residual is a tensor of size (10000,) and inputs is a tensor of size (1001,). Thus, the Jacobian is 10000 by 1001, which takes about 74 MB in double precision.

However, functorch.jacrev had a memory overflow error on a 24 GB GPU. The error message is shown below. I am wondering why functorch takes so much memory in reverse-mode autodiff, and whether there is a solution to this issue. Below is a working example that reproduces the issue.

CUDA 11.4, FuncTorch 1.13.0, PyTorch 1.13.0, GPyTorch 1.9.0

Thanks!