Just taking a random browse through the code here, and I'm curious about this function:
    def odegrad(c_n, t, Ka_n, x_Ln, x_R):
        N = c_n.size
        d2c = numpy.zeros([N, N], numpy.float64)
        for n in range(N):
            d2c[n, :] = -Ka_n[n] * (x_Ln[n]/V - c_n[n])
            d2c[n, n] += -(Ka_n[n] * (x_R/V - c_n[:].sum()) + 1.0)
        return d2c
in models.py. Would it be better to use a library that does automatic differentiation here, both for efficiency and accuracy? Admittedly somewhat low priority at the moment, but I thought I'd note it in case it turns out to be worthwhile.
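To make the suggestion concrete, here's a rough sketch of what this could look like with JAX. I'm guessing at the ODE right-hand side that `odegrad` is the Jacobian of (working backwards from the hand-coded entries, something like `Ka_n * (x_Ln/V - c_n) * (x_R/V - sum(c_n)) - c_n`), and `V` is presumably a module-level volume constant in models.py, so treat the names here as placeholders:

```python
import jax
import jax.numpy as jnp

V = 1.0  # placeholder; in models.py V appears to come from module scope


def rhs(c_n, t, Ka_n, x_Ln, x_R):
    # Assumed ODE right-hand side whose Jacobian w.r.t. c_n matches
    # the hand-coded d2c above -- this is my reconstruction, not
    # necessarily the actual rate law in models.py.
    return Ka_n * (x_Ln / V - c_n) * (x_R / V - c_n.sum()) - c_n


# Forward-mode autodiff Jacobian with respect to c_n (argument 0);
# this replaces the hand-written loop entirely.
odegrad_ad = jax.jacfwd(rhs, argnums=0)
```

One nice property is that if the rate law ever changes, the Jacobian stays in sync automatically, and `jax.jit` could be applied on top if the Python-level loop ever shows up in profiling.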