Open MaximilianPi opened 1 month ago
Hi @dfalbel,
Why can I calculate the first- but not the second-order derivative of gamma samples? It would be really great if I could.
```r
alpha = torch_tensor(0.5, requires_grad = TRUE)
beta = torch_tensor(0.5, requires_grad = TRUE)
sample = torch::distr_gamma(alpha, beta)$rsample(10L)
loss = sample$sum()
g = torch::autograd_grad(loss, alpha, retain_graph = TRUE, create_graph = TRUE)
gg = torch::autograd_grad(g[[1]], alpha)
```
```
Error in cpp_autograd_grad(outputs, inputs, grad_outputs, retain_graph, :
  the derivative for '_standard_gamma_grad' is not implemented.
Exception raised from not_implemented_base at /Users/dfalbel/Documents/actions-runner/mlverse-m1/_work/libtorch-mac-m1/libtorch-mac-m1/pytorch/torch/csrc/autograd/FunctionsManual.cpp:110 (most recent call first):
frame #0: at::Tensor torch::autograd::generated::details::not_implemented_base<at::Tensor>(char const*, char const*) + 216 (0x15d348ab0 in libtorch_cpu.dylib)
frame #1: torch::autograd::generated::StandardGammaGradBackward0::apply(std::__1::vector<at::Tensor, std::__1::allocator<at::Tensor>>&&) + 88 (0x15b9b54d4 in libtorch_cpu.dylib)
frame #2: torch::autograd::Node::operator()(std::__1::vector<at::Tensor, std::__1::allocator<at::Tensor>>&&) + 120 (0x15c95b078 in libtorch_cpu.dylib)
frame #3: torch::autograd::Engine::evaluate_function(std::__1::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::__1::shared_ptr<torch::autograd::ReadyQueue> const&) + 2932 (0x1
```
It looks like this isn't implemented in torch yet, unfortunately:
https://github.com/pytorch/pytorch/blob/980f5ac0499447cd6d39b6241ae30ae30937a3e5/tools/autograd/derivatives.yaml#L1926-L1927
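For reference, the same limitation can be reproduced directly in upstream PyTorch, since the R package dispatches to libtorch. This is a hedged sketch (exact error wording may vary across PyTorch versions): `rsample()` for the Gamma distribution goes through `_standard_gamma`, which has a registered first-order gradient, but the backward node `StandardGammaGradBackward0` has no derivative formula of its own, so differentiating a second time fails.

```python
import torch

alpha = torch.tensor(0.5, requires_grad=True)
beta = torch.tensor(0.5, requires_grad=True)
sample = torch.distributions.Gamma(alpha, beta).rsample((10,))
loss = sample.sum()

# First-order gradient works: _standard_gamma has a gradient w.r.t. alpha.
(g,) = torch.autograd.grad(loss, alpha, retain_graph=True, create_graph=True)

# Second-order gradient fails: the derivative of '_standard_gamma_grad'
# is marked not-implemented in derivatives.yaml.
try:
    torch.autograd.grad(g, alpha)
    msg = None
except RuntimeError as e:  # NotImplementedError subclasses RuntimeError
    msg = str(e)
print(msg)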