Open ChevalierOhm opened 1 month ago
https://github.com/PaddlePaddle/Paddle/blob/131999233ef997fc8d3f24b27830925b78cf17aa/python/paddle/base/dygraph/base.py#L610
My code is as follows:

```python
import paddle
import numpy as np

# Build a function
def transform(x):
    return 4 * x ** 3 + 1

x = paddle.rand(shape=[5], dtype=np.float32)
x.stop_gradient = False
outputs = transform(x)

# Compute the second-order derivative with paddle.grad
dudx = paddle.grad(outputs, x, grad_outputs=paddle.ones_like(x), retain_graph=True, create_graph=True)[0]
print(dudx)
d2udx2 = paddle.grad(dudx, x, grad_outputs=paddle.ones_like(x), retain_graph=True, create_graph=True)[0]
print(d2udx2)
```

The code above raises an error. What is wrong?
Error message:

```
Error Traceback (most recent call last)
/tmp/ipykernel_95/2675772064.py in <module>
     12 dudx = paddle.grad(outputs, x, grad_outputs=None, create_graph=True)[0]
     13 print(dudx)
---> 14 d2udx2 = paddle.grad(dudx, x, grad_outputs=None, create_graph=True)[0]
     15 print(d2udx2)
```
Hello, this issue could not be reproduced on the current develop branch. Please update paddle.
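As a sanity check for the expected values once the code runs on a newer Paddle: for f(x) = 4x³ + 1, the first derivative is 12x² and the second derivative is 24x, so `dudx` and `d2udx2` should match those. A minimal sketch verifying this analytically with NumPy finite differences (no Paddle required; the step size `h` is an arbitrary illustrative choice):

```python
import numpy as np

# The function from the issue: f(x) = 4*x**3 + 1
def transform(x):
    return 4 * x ** 3 + 1

x = np.linspace(0.5, 2.0, 5)
h = 1e-4  # finite-difference step (illustrative choice)

# Central differences for the first and second derivatives
dudx_fd = (transform(x + h) - transform(x - h)) / (2 * h)
d2udx2_fd = (transform(x + h) - 2 * transform(x) + transform(x - h)) / h ** 2

# Compare against the analytic derivatives f'(x) = 12*x**2, f''(x) = 24*x
assert np.allclose(dudx_fd, 12 * x ** 2, rtol=1e-4)
assert np.allclose(d2udx2_fd, 24 * x, rtol=1e-4)
```

If `paddle.grad` with `create_graph=True` succeeds, its outputs should agree with these analytic values elementwise.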