import numdifftools
import numpy as np
print(numdifftools.__version__) # out: 0.9.39
def f(x):
    v = sum(x)
    return np.r_[v, 2*v, 3*v]
J = numdifftools.Jacobian(f)
input_2d = np.r_[1,2]
print(np.shape(J(input_2d))) # out: (3,2) -- OK
input_1d = np.r_[1]
print(np.shape(J(input_1d))) # out: (1,3) -- I would expect (3,1)
The Jacobian evaluation transposes the result for one-dimensional but vector-valued inputs: for f: R^n -> R^m I would expect the Jacobian to have shape (m, n), i.e. (3, 1) here, yet (1, 3) is returned. I see that the derivative with respect to a scalar (which is almost the case for input_1d) may be defined differently. Since the code above specifically requests the Jacobian, however, I was wondering whether this behavior is intended or whether it indicates an inconsistency/bug.
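In the meantime I am using a small wrapper as a workaround (my own sketch, not part of the numdifftools API; the helper name jacobian_mn is my own) that transposes the result back to the conventional (m, n) shape when the input has a single element:

def jacobian_mn(func, x):
    """Evaluate the Jacobian and force the conventional (m, n) shape,
    where m is the output dimension and n = np.size(x)."""
    result = np.atleast_2d(numdifftools.Jacobian(func)(x))
    # For single-element inputs the result comes back as (1, m);
    # transpose it so the shape matches the (m, n) convention.
    if np.size(x) == 1 and result.shape[0] == 1:
        result = result.T
    return result

print(np.shape(jacobian_mn(f, input_1d))) # out: (3, 1)
print(np.shape(jacobian_mn(f, input_2d))) # out: (3, 2)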