Open giangbang opened 8 months ago
Hi, I'm using autograd to compute the gradient of a squared L2 norm. The code is as simple as:

```python
import autograd.numpy as np
from autograd import grad

def f(x):
    return np.linalg.norm(x, axis=-1) ** 2

f_dx = grad(f)
f_dx(np.array([[0., 0.]]))
```
However, when I pass the zero vector to `f`, it outputs `nan` along with a warning:

```
nan
\autograd\numpy\linalg.py:100: RuntimeWarning: invalid value encountered in scalar divide
  return expand(g / ans) * x
```
When I change the code to something that does not involve `linalg`, it produces 0 as expected:

```python
def f(x):
    return np.sum(np.square(x))

f_dx = grad(f)
f_dx(np.array([[0, 0]], dtype=float))
```

```
array([[0., 0.]])
```
The warning points to this line in autograd:

https://github.com/HIPS/autograd/blob/9a90bd6172d1882235c326c56c17a9540357d86b/autograd/numpy/linalg.py#L100
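A minimal sketch of why the chain rule produces `nan` here, written in plain NumPy so it runs without autograd (the manually expanded VJP below is my own simplification of that `linalg.py` line, not autograd's exact code):

```python
import numpy as np

x = np.array([0.0, 0.0])
ans = np.linalg.norm(x)          # 0.0 at the zero vector

# The VJP of norm is (simplified) g * x / ans. With the outer
# gradient g = 2 * ans = 0, the chain rule evaluates as:
with np.errstate(invalid="ignore"):
    grad_norm = x / ans          # 0/0 -> nan (the RuntimeWarning above)
chain = 2 * ans * grad_norm      # 0 * nan is still nan
print(chain)                     # [nan nan]

# Computing the squared norm directly never divides by the norm;
# its analytic gradient is 2 * x, which is well defined at zero:
def sqnorm(x):
    return np.sum(x ** 2)

print(2 * x)                     # [0. 0.]
```

So the `nan` is the `0 * (0/0)` pattern: the true gradient of `||x||**2` at zero is `0`, but the intermediate division by `ans` poisons the product.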