PaddlePaddle / Paddle

PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (PaddlePaddle core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning & machine learning)
http://www.paddlepaddle.org/
Apache License 2.0

paddle.grad cannot compute higher-order derivatives #64386

Open ChevalierOhm opened 1 month ago

ChevalierOhm commented 1 month ago

https://github.com/PaddlePaddle/Paddle/blob/131999233ef997fc8d3f24b27830925b78cf17aa/python/paddle/base/dygraph/base.py#L610

My code is as follows:

```python
import paddle
import numpy as np

# Build a function
def transform(x):
    return 4 * x ** 3 + 1

x = paddle.rand(shape=[5], dtype=np.float32)
x.stop_gradient = False
print(x)

outputs = transform(x)
print(outputs)

# Use paddle.grad to compute the second-order derivative
dudx = paddle.grad(outputs, x, grad_outputs=paddle.ones_like(x),
                   retain_graph=True, create_graph=True)[0]
print(dudx)
d2udx2 = paddle.grad(dudx, x, grad_outputs=paddle.ones_like(x),
                     retain_graph=True, create_graph=True)[0]
print(d2udx2)
```

The code above raises an error. What is wrong here?
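As a sanity check on the math (independent of Paddle), the derivatives the code above is trying to obtain can be verified numerically with central finite differences in plain NumPy. For f(x) = 4x³ + 1, the expected results are f'(x) = 12x² and f''(x) = 24x; this is only a sketch to confirm the target values, not a fix for the Paddle error:

```python
import numpy as np

def transform(x):
    return 4 * x ** 3 + 1

x = np.linspace(0.1, 1.0, 5)
h = 1e-4  # finite-difference step

# Central differences for the first and second derivatives
dudx_num = (transform(x + h) - transform(x - h)) / (2 * h)
d2udx2_num = (transform(x + h) - 2 * transform(x) + transform(x - h)) / h ** 2

# Analytic derivatives: f'(x) = 12 x^2, f''(x) = 24 x
assert np.allclose(dudx_num, 12 * x ** 2, rtol=1e-4)
assert np.allclose(d2udx2_num, 24 * x, rtol=1e-3)
print("finite-difference check passed")
```

If `paddle.grad` works correctly with `create_graph=True`, the tensors `dudx` and `d2udx2` from the snippet above should match these values elementwise.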

ChevalierOhm commented 1 month ago

Error message:

```
Traceback (most recent call last)
/tmp/ipykernel_95/2675772064.py in <module>
     12 dudx = paddle.grad(outputs, x, grad_outputs=None, create_graph=True)[0]
     13 print(dudx)
---> 14 d2udx2 = paddle.grad(dudx, x, grad_outputs=None, create_graph=True)[0]
     15 print(d2udx2)

in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs, allow_unused, no_grad_vars)

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/wrapped_decorator.py in __impl__(func, *args, **kwargs)
     24 def __impl__(func, *args, **kwargs):
     25     wrapped_func = decorator_func(func)
---> 26     return wrapped_func(*args, **kwargs)
     27
     28 return __impl__

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/framework.py in __impl__(*args, **kwargs)
    545     % func.__name__
    546 )
--> 547 return func(*args, **kwargs)
    548
    549 return __impl__

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/base.py in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs, allow_unused, no_grad_vars)
    689     retain_graph, create_graph,
    690     only_inputs, allow_unused,
--> 691     no_grad_vars)
    692 else:
    693     place = core.Place()

ValueError: (InvalidArgument) The 0-th input does not appear in the backward graph. Please check the input tensor or set allow_unused=True to get None result.
  [Hint: Expected allow_unused == true, but received allow_unused:0 != true:1.] (at /paddle/paddle/fluid/eager/general_grad.h:471)
```
zoooo0820 commented 1 month ago

Hi, we could not reproduce this issue on the current develop branch. Please update your Paddle installation.