Closed -- ianhbell closed this issue 3 years ago
Change your `alphar` method as follows:

```cpp
template<typename TType, typename RhoType>
auto alphar(const TType &T, const RhoType& rho) const -> TType {
    auto rhotot = std::accumulate(std::begin(rho), std::end(rho), static_cast<typename RhoType::value_type>(0.0));
    auto Psiminus = -log(1.0 - b * rhotot);
    auto Psiplus = rhotot;
    return Psiminus - a / (R * T) * Psiplus;
}
```
Changes
Make sure you specify the return type as `TType`. Reason: operations with `autodiff::dual` produce a compile-time expression. This expression may contain references to objects in the function scope. By specifying the return type as `TType` (or `autodiff::dual`), you force the expression to be evaluated at the exit of the function. This is very much like what happens with Eigen as well (try to return an Eigen expression involving vectors/matrices constructed inside a function!). There is no way around this -- well, there is: using shared pointers, but the performance would be ridiculous for forward differentiation.
I had to use `static_cast<typename RhoType::value_type>(0.0)` instead of your `(RhoType::value_type)0.0`.
This compiled fine with clang++ v11 and g++ v7.3.
I would be interested to learn more about your equation of state based on residual Helmholtz energy! ;) Can this be applied to any solution (e.g., electrolyte solutions)?
That indeed did seem to get it to work, thanks! Complete code below. Output: `1.05879e-22 diff, absolute`. The derivative's value is of order 1e-5, so the difference is at the level of numerical precision -- great!
If I need to take derivatives with respect to molar concentrations, I need the return type to be the container type of `RhoType::value_type`. My general treatment has been to do something like `using ReturnType = decltype(T*rho[0])` inside the function body, but I can't apply the same treatment to the return type. Is there a concise way to do the same thing for the return type of the function?
P.S. Can you shoot me an email at ian.bell@nist.gov and we can chat EOS to your heart's content?
```cpp
#include <iostream>
#include <algorithm>
#include <numeric>
#include <valarray>
#include <complex>
#include "autodiff/forward.hpp"
#include "autodiff/reverse.hpp"

/* A (very) simple implementation of the van der Waals EOS */
class vdWEOSSimple {
private:
    double a, b;
public:
    vdWEOSSimple(double a, double b) : a(a), b(b) {};
    const double R = 1.380649e-23 * 6.02214076e23; ///< Exact value, given by k_B*N_A

    template<typename TType, typename RhoType>
    TType alphar(const TType &T, const RhoType& rho) const {
        auto rhotot = std::accumulate(std::begin(rho), std::end(rho), static_cast<typename RhoType::value_type>(0.0));
        auto Psiminus = -log(1.0 - b * rhotot);
        auto Psiplus = rhotot;
        return Psiminus - a / (R * T) * Psiplus;
    }
};

void test_vdW_autodiff() {
    // Argon + Xenon
    std::valarray<double> Tc_K = { 150.687, 289.733 };
    std::valarray<double> pc_Pa = { 4863000.0, 5842000.0 };
    double T = 298.15;
    auto rho = 3.0;
    auto R = 1.380649e-23 * 6.02214076e23; ///< Exact value, given by k_B*N_A
    auto rhotot = rho;
    const std::valarray<double> rhovec = { rhotot / 2, rhotot / 2 };
    int i = 0;
    double ai = 27.0/64.0*pow(R*Tc_K[i], 2)/pc_Pa[i];
    double bi = 1.0/8.0*R*Tc_K[i]/pc_Pa[i];
    vdWEOSSimple vdW(ai, bi);
    autodiff::dual varT = T;
    auto u = vdW.alphar(varT, rhovec);
    int rr = 0;
    auto dalphardT = derivative([&vdW, &rhovec](auto& T) {return vdW.alphar(T, rhovec); }, wrt(varT), at(varT));
    double h = 1e-100;
    auto dalphardT_comstep = vdW.alphar(std::complex<double>(T,h), rhovec).imag()/h;
    std::cout << dalphardT-dalphardT_comstep << " diff, absolute" << std::endl;
}

int main() {
    test_vdW_autodiff();
    return EXIT_SUCCESS;
}
```
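For context, the complex-step reference derivative computed above (with `h = 1e-100`) follows from the Taylor expansion of a real-analytic function at a purely imaginary perturbation:

```latex
f(T + ih) = f(T) + ih\,f'(T) - \tfrac{h^2}{2}\,f''(T) + O(h^3)
\quad\Longrightarrow\quad
f'(T) = \frac{\operatorname{Im}\, f(T + ih)}{h} + O(h^2)
```

There is no subtraction of nearly equal numbers, so `h` can be made absurdly small without cancellation error, which is why the comparison against autodiff lands at machine precision.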
Compiler didn't like this:
template<typename TType, typename RhoType>
auto alphar(const TType &T, const RhoType& rho) const -> decltype(T*rho[0]){
auto rhotot = std::accumulate(std::begin(rho), std::end(rho), static_cast<typename RhoType::value_type>(0.0));
auto Psiminus = -log(1.0 - b * rhotot);
auto Psiplus = rhotot;
return Psiminus - a / (R * T) * Psiplus;
}
as recommended by https://stackoverflow.com/questions/57464396/how-to-get-resultant-type-of-multiplying-two-different-types
> My general treatment has been to do something like: using ReturnType = decltype(T*rho[0]) inside the function body, but I can't apply the same treatment to the return type. Is there a concise way to do the same thing to the return type of the function?
Can you point me to a code where you do this? I'm not sure I understood you here.
I had a quick look at your lib and I saw many methods without an explicit return type. This is fine if you are dealing with `double`, because the compiler will automatically deduce the return type as `double`. But if you are using `autodiff::dual`, then this may produce a mathematical expression involving dual numbers. An explicit return type will ensure the expression is evaluated before leaving the function scope.

If you are returning a vector, make sure this is a vector of numbers with the same type as the type of the argument for which you want to compute derivatives (it seems you want a vector of `TType`, because this is the type of temperature). Your mole fractions should also have type `TType`.
`decltype(T*rho[0])`

If `T` and `rho[0]` are of type `double`, then `decltype(T*rho[0])` === `double`.

If they are `dual`, then `decltype(T*rho[0])` === `autodiff::BinaryExpression<dual, dual>`.
Why not use a common number type across your entire lib? This will simplify your life. ;)
Then, just make sure you explicitly specify the return type as `TType`, or do this inside your function:
```cpp
template<typename TType>
auto f(const TType& T)
{
    return TType(1.0 + T + T*T);
}
```
I would do this:
```cpp
template<typename TType>
auto f(const TType& T) -> TType
{
    return 1.0 + T + T*T;
}
```
> P.S. Can you shoot me an email at ian.bell@nist.gov and we can chat EOS to your heart's content?
Great, I'll contact you at some point. I think we have lots of overlapping interests.
In general, at most one of the arguments to `alphar` will have a non-double numerical type (`rho` will be a vector-like container). But you don't know a priori which argument will have the double numerical type.
`decltype(T*rho[0])` with `rho` a `std::valarray<double>` and `T` a `double` should be ok, right? Or do you end up with `autodiff::BinaryExpression<dual, double>`?
> In general, at most one of the arguments to alphar will have a non-double numerical type (rho will be a vector-like container). But you don't know a priori which argument will be double numerical type.
That's why I'm suggesting a common number type for everything.
> decltype(T*rho[0]) with rho a std::valarray and T a double should be ok, right? Or do you end up with autodiff::BinaryExpression<dual, double>?
~~You'll end up with `double`, because autodiff will not mess around with `operator*` for double-double.~~

Is `T` of `dual` type? If so, yes, you'll get the binary expression.
Here is an example of how I do things in a function: https://github.com/ianhbell/teqp/blob/f9ce9be8e0f3bfc24b35dfd639eaa600cdb514fa/include/teqp/models/multifluid.hpp#L34
> In general, at most one of the arguments to alphar will have a non-double numerical type (rho will be a vector-like container). But you don't know a priori which argument will be double numerical type.
>
> That's why I'm suggesting a common number type for everything.
>
> decltype(T*rho[0]) with rho a std::valarray and T a double should be ok, right? Or do you end up with autodiff::BinaryExpression<dual, double>?
>
> You'll end up with double, because autodiff will not mess around with operator* for double-double.
Whoops, I meant here a `std::valarray<dual>` and a `double` for `T`.
Thanks for the link. It's clear now how you do this with the `resulttype` below:
```cpp
template<typename TauType, typename DeltaType, typename MoleFractions>
auto alphar(const TauType& tau, const DeltaType& delta, const MoleFractions& molefracs) const {
    // Type promotion, without the const-ness; `typename` is needed for the dependent type
    using resulttype = typename std::remove_const<decltype(tau * delta * molefracs[0])>::type;
    resulttype alphar = 0.0;
    auto N = molefracs.size();
    for (auto i = 0; i < N; ++i) {
        alphar = alphar + molefracs[i] * EOSs[i].alphar(tau, delta);
    }
    return alphar;
}
```
I would stick with just one template number type and explicitly use `double` here and there (say, for constants or things I don't care to take derivatives of later on).
This is how I would implement that function:
```cpp
template<typename Real, typename MoleFractions>
auto alphar(const Real& tau, const Real& delta, const MoleFractions& molefracs) const -> Real
{
    Real alphar = 0.0;
    auto N = molefracs.size();
    for (auto i = 0; i < N; ++i) {
        alphar = alphar + molefracs[i] * EOSs[i].alphar(tau, delta);
    }
    return alphar;
}
```
Note here that you don't need `-> Real` as the return type, because you are not returning an expression (e.g., `a+b*x`) but `alphar`, which is `Real`.
> This is how I would implement that function:
>
> template<typename Real, typename MoleFractions> auto alphar(const Real& tau, const Real& delta, const MoleFractions& molefracs) const -> Real { Real alphar = 0.0; auto N = molefracs.size(); for (auto i = 0; i < N; ++i) { alphar = alphar + molefracs[i] * EOSs[i].alphar(tau, delta); } return alphar; }
But that forces `tau` and `delta` to have the same numerical type, and that's not always true.
> This is how I would implement that function:
>
> template<typename Real, typename MoleFractions> auto alphar(const Real& tau, const Real& delta, const MoleFractions& molefracs) const -> Real { Real alphar = 0.0; auto N = molefracs.size(); for (auto i = 0; i < N; ++i) { alphar = alphar + molefracs[i] * EOSs[i].alphar(tau, delta); } return alphar; }
>
> But that forces tau and delta to have the same numerical type, but that's not always true.
Besides, that function doesn't work with autodiff, but that's a different problem.

BTW, I'm also assuming that `MoleFractions` is a vector of `Real` numbers.
> But that forces tau and delta to have the same numerical type, but that's not always true.
Choose one type name that you know will be a type for automatic differentiation, say, `Real`.

Choose another type name that you know may be `double` in general, but that you may want to change to something else later. Name this `Double`.

I would keep it simple, though, and go with just `Real`. Later on, when things are working correctly, you can start changing this (optimizing the choice of number types).
But maybe I'm missing something. Why do you have a different template type for every argument (`TType`, `TauType`, `DeltaType`)?
Thing is, I have already gotten this working with `MultiComplex<double>` (which was tricky enough), so I am trying to minimize major updates to the type system in the top-level interface (though I am willing to make any changes that are necessary).

I would love to have a single numerical type, but I believe I can't, because I need to be able to mix and match `double`, `std::complex<double>`, `MultiComplex<double>`, `autodiff::dual`, etc. The array argument `rho` needs to be able to support all the numerical types as well.
Type promotion would maybe allow for more things to use the same numerical type, but I don't think that will work in all cases. I've simplified the problem a lot for this test, and maybe I have simplified it too much to make the complexities evident.
OK, I see your point. In this case, I would just recommend you make sure your return type is an `autodiff::dual` (not an expression).

I have another branch of `autodiff`, in which there is a type called `real`. It also implements high-order forward derivatives. `real` does not produce compile-time expressions like `dual`.
It's the `develop` branch in my own fork. Have a look at the examples at https://github.com/allanleal/autodiff/tree/develop. This is the branch I'm actually using for my other libraries. There are tons of changes there that I have not yet merged here. Hopefully soon! :)
Check this cubic EOS implementation using `autodiff::real`: https://github.com/allanleal/reaktoro/blob/v2.0-preview-equilibrium-sensitivity/Reaktoro/Thermodynamics/Fluids/CubicEOS.cpp
> OK, I see your point. In this case, I would just recommend you to make sure your return type is an autodiff::dual (not an expression).
>
> I have another branch of autodiff, in which there is a type called real. It also implements high-order forward derivatives. real does not produce compile-time expressions like dual.
Don't think I can do that in general, because what if I put in arguments of `double` and `std::valarray<double>`? In that case I should get a `double` as the return type, not an `autodiff::dual`.
I dunno, it's a tricky problem to do this very generically, I'm not trying to sugarcoat that. Maybe we can have a chat to figure out a sensible approach to templated C++ evaluation that is autodiff friendly.
> Don't think I can do that in general, because what if I put in arguments of double, std::valarray? In that case I should get a double as a return type, not an autodiff::dual
What I meant is that you avoid a `resulttype` that is obtained via `decltype` on an expression. But again, I'm not sure how easy this will be in your specific case. Your `resulttype` should be such that it's a common type among the types you are operating on. For example, if there are `TType` and `complex<TType>`, then the result type should be `complex<TType>`.
> I dunno, it's a tricky problem to do this very generically, I'm not trying to sugarcoat that. Maybe we can have a chat to figure out a sensible approach to templated C++ evaluation that is autodiff friendly.
Sounds good to me!
Do give the `autodiff::real` type a try later. This should solve this issue with the return type. I'll check new messages later.
Whoah your example code is very much up my alley: https://github.com/allanleal/reaktoro/blob/v2.0-preview-equilibrium-sensitivity/Reaktoro/Thermodynamics/Fluids/CubicEOS.cpp
Hi Ian, very good. I'll close this issue now, as it seems the compilation problem could be resolved. Let's discuss via email and chat about the other topics we mentioned above.
Wow, differentiation with `autodiff::real` is very fast.
```cpp
#include <iostream>
#include <algorithm>
#include <numeric>
#include <valarray>
#include <complex>
#include <chrono>
#include <autodiff/forward/real/real.hpp>
#include <autodiff/forward/real.hpp>

/* A (very) simple implementation of the van der Waals EOS */
class vdWEOSSimple {
private:
    double a, b;
public:
    vdWEOSSimple(double a, double b) : a(a), b(b) {};
    const double R = 1.380649e-23 * 6.02214076e23; ///< Exact value, given by k_B*N_A

    template<typename TType, typename RhoType>
    auto alphar(const TType &T, const RhoType& rho) const -> decltype(T*rho[0]) {
        auto rhotot = std::accumulate(std::begin(rho), std::end(rho), static_cast<typename RhoType::value_type>(0.0));
        auto Psiminus = -log(1.0 - b * rhotot);
        auto Psiplus = rhotot;
        return Psiminus - a / (R * T) * Psiplus;
    }
};

void test_vdW_autodiff() {
    // Argon + Xenon
    std::valarray<double> Tc_K = { 150.687, 289.733 };
    std::valarray<double> pc_Pa = { 4863000.0, 5842000.0 };
    double T = 298.15;
    auto rho = 3.0;
    auto R = 1.380649e-23 * 6.02214076e23; ///< Exact value, given by k_B*N_A
    auto rhotot = rho;
    const std::valarray<double> rhovec = { rhotot / 2, rhotot / 2 };
    int i = 0;
    double ai = 27.0/64.0*pow(R*Tc_K[i], 2)/pc_Pa[i];
    double bi = 1.0/8.0*R*Tc_K[i]/pc_Pa[i];
    vdWEOSSimple vdW(ai, bi);
    double h = 1e-100;
    autodiff::real varT = T;
    auto ticn1 = std::chrono::steady_clock::now();
    auto u = vdW.alphar(T, rhovec);
    auto tic0 = std::chrono::steady_clock::now();
    auto dalphardT = derivative([&vdW, &rhovec](auto& T) {return vdW.alphar(T, rhovec); }, wrt(varT), at(varT));
    auto tic1 = std::chrono::steady_clock::now();
    auto dalphardT_comstep = vdW.alphar(std::complex<double>(T,h), rhovec).imag()/h;
    auto tic2 = std::chrono::steady_clock::now();
    std::cout << dalphardT-dalphardT_comstep << " diff, absolute" << std::endl;
    std::cout << std::chrono::duration<double>(tic0 - ticn1).count() << std::endl;
    std::cout << std::chrono::duration<double>(tic1 - tic0).count() << std::endl;
    std::cout << std::chrono::duration<double>(tic2 - tic1).count() << std::endl;
}

int main() {
    test_vdW_autodiff();
    return EXIT_SUCCESS;
}
```
yields (admittedly not a very solid timing approach):

```
1.05879e-22 diff, absolute
1e-07
0
1.4e-06
```
Thanks for sharing this, Ian. Did the `autodiff::real` type solve that issue you had with the return type convention you need to adhere to?
Indeed it did solve the problem, but now I need higher derivatives, and I can't use the same approach. Some thinking and extension of autodiff might be needed, I fear.
`autodiff::real` supports higher-order derivatives. BUT, these are directional derivatives, as you may have seen in the examples: say, an 8th-order derivative along a given vector that perturbs your vector of mole fractions. This is to allow higher-order Taylor expansions of a function along a direction.

`autodiff::dual` supports higher-order cross derivatives.

Not sure if this introduces a new issue for you! :)
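In other words (my paraphrase of the distinction above, not text from the autodiff docs): for a fixed direction v, the directional mode works with the one-dimensional restriction of f and its Taylor coefficients,

```latex
g(t) = f(x + t\,v), \qquad
g^{(k)}(0) = \left.\frac{\mathrm{d}^k}{\mathrm{d}t^k}\, f(x + t\,v)\right|_{t=0},
\qquad
f(x + t\,v) \approx \sum_{k=0}^{n} \frac{t^k}{k!}\, g^{(k)}(0)
```

so a single sweep yields all derivatives of f along v up to order n, but not the full tensor of mixed partial derivatives, which is what `dual` with `wrt(x, y)` provides.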
Yeah, the `decltype` magic on the return type works with `real`, but not `dual`. Perhaps that is something that could be readily implemented? If so, that might be all that is needed?
> Perhaps that is something that could be readily implemented? If so, that might be all that is needed?
Mathematical operations on `dual` numbers produce these compile-time expressions (to avoid memory allocation as much as possible and also to enable certain expression optimizations at compile time). Changing this (altering the way `dual` works) could compromise performance, especially for very high-order derivatives.
Yeah, that is also why Eigen is so fast (and sometimes also so challenging to debug when things go wrong). But I don't need to tell you that.
Could it somehow be forced that, when an expression ends up as the return argument in `derivative` with duals, the expression tree is flattened automatically, like your previously proposed `TType` cast of the return type? What I am trying desperately to avoid is having to do anything "smart" within the function to handle any of the numerical types.
> Could it be somehow forced that when an expression ends up as the return argument in derivative with duals, the expression tree is flattened automatically like your previously proposed TType cast of the return type? What I am trying desperately to avoid is having to do anything "smart" within the function to handle any of the numerical types.
I think this could be possible. So, your function is returning an expression (e.g., `BinaryExpr<dual, dual, MultOp>`), but the `derivative`, `jacobian`, etc. functions would be able to figure out the evaluated type of the expression. I'll see if I can do this over the next few days.
@ianhbell I made the following change in `autodiff`:
```cpp
template<typename Function, typename Wrt, typename Args>
auto derivative(const Function& f, Wrt&& wrt, Args&& args)
{
    using Result = decltype(eval(std::apply(f, args))); // Before: using Result = decltype(std::apply(f, args));
    Result u;
    return derivative(f, std::forward<Wrt>(wrt), std::forward<Args>(args), u);
}
```
This solves the compilation issue.
However, testing the following scenario:
```cpp
SECTION("testing functions without explicit return type")
{
    auto f = [](dual x, dual y, dual z)
    {
        dual a = x * y * z;
        dual b = x + y + z;
        dual c = log(x);
        return a*b*c;
    };

    dual x = 2.6;
    dual y = 3.6;
    dual z = 4.6;

    const dual fxyz = f(x, y, z);

    REQUIRE( fxyz == approx( (x * y * z)*(x + y + z)*log(x) ) );
    REQUIRE( derivative(f, wrt(x), at(x, y, z)) == approx( (y * z)*(x + y + z)*log(x) + (x * y * z)*log(x) + (x * y * z)*(x + y + z)/x ) );
    REQUIRE( derivative(f, wrt(y), at(x, y, z)) == approx( (x * z)*(x + y + z)*log(x) + (x * y * z)*log(x) ) );
    REQUIRE( derivative(f, wrt(z), at(x, y, z)) == approx( (x * y)*(x + y + z)*log(x) + (x * y * z)*log(x) ) );
}
```
produces the following failure:

```
CHECK( fxyz == approx( (x * y * z)*(x + y + z)*log(x) ) )
with expansion:
0 == Approx( 444.3174083927 )
```
As you see, the evaluation of `f` in `const dual fxyz = f(x, y, z);` does not produce the correct result, because `f` returns an expression containing references to its scope variables `a`, `b`, `c`.
The other CHECKs succeed, though, maybe because the compiler is inlining `f` inside `derivative`, since `std::apply(f, args)` is used inside `derivative`. Indeed, replacing:
```cpp
const dual fxyz = f(x, y, z);
```
by
```cpp
const dual fxyz = std::apply(f, std::make_tuple(x, y, z));
```
results in a successful test.
The safest approach is then to explicitly evaluate the expressions involving `dual` before returning them!
```cpp
auto f = [](dual x, dual y, dual z)
{
    dual a = x * y * z;
    dual b = x + y + z;
    dual c = log(x);
    return eval(a*b*c);
};
```
Perhaps you could have an `eval` function that (at compile time) calls `autodiff::eval` if `autodiff` is used, or just returns the argument if not:
```cpp
template<typename Expr>
auto eval(const Expr& expr)
{
    if constexpr (UsingAutodiff)
        return autodiff::eval(expr);
    else
        return expr;
}
```
Define `constexpr bool UsingAutodiff = true;` before `#include <autodiff/forward.hpp>`.
How about a simpler solution? In my case I am already wrapping the function in a lambda so that I have a single-variable function to offer to `derivative`, so I could force the eval to happen in the lambda. I did try that with `autodiff::dual2nd`, but it didn't work...
I did:

```cpp
autodiff::dual2nd varT = T;
auto dalphardT = derivative([&vdW, &rhovec](auto& T) {return eval(vdW.alphar(T, rhovec)); }, wrt(varT, varT), at(varT));
```
(gave me the first derivative value) and also tried:

```cpp
autodiff::dual2nd varT = T;
auto dalphardT = derivative([&vdW, &rhovec](auto& T) {return eval(vdW.alphar(T, rhovec)); }, wrt<2>(varT), at(varT));
```

which didn't compile. Seems there are no examples of a single-variable function with higher-order derivatives in forward mode: https://autodiff.github.io/tutorials/#higher-order-derivatives-of-a-multi-variable-function
Try this with my branch `develop` at https://github.com/allanleal/autodiff/tree/develop.

Reason: you just found a bug in the `eval` method of `master`! The `eval` method in the `develop` branch above is more solid. There is a compile-time recursion in the `eval` of `master` that reduces the order of the `dual` number, which is not what we want here (that's why you got the first-order derivative when using it). In the `develop` branch, there is no recursion and no order reduction.
Full code for you (using the `develop` branch), tested with clang-11:
```cpp
#include <iostream>
#include <algorithm>
#include <numeric>
#include <valarray>
#include "autodiff/forward/dual.hpp"

/* A (very) simple implementation of the van der Waals EOS */
class vdWEOSSimple {
private:
    double a, b;
public:
    vdWEOSSimple(double a, double b) : a(a), b(b) {};
    const double R = 1.380649e-23 * 6.02214076e23; ///< Exact value, given by k_B*N_A

    template<typename TType, typename RhoType>
    // auto alphar(const TType &T, const RhoType& rho) const -> TType {
    auto alphar(const TType &T, const RhoType& rho) const {
        auto rhotot = std::accumulate(std::begin(rho), std::end(rho), static_cast<typename RhoType::value_type>(0.0));
        auto Psiminus = -log(1.0 - b * rhotot);
        auto Psiplus = rhotot;
        return Psiminus - a / (R * T) * Psiplus;
    }
};

void test_vdW_autodiff() {
    // Argon + Xenon
    std::valarray<double> Tc_K = { 150.687, 289.733 };
    std::valarray<double> pc_Pa = { 4863000.0, 5842000.0 };
    double T = 298.15;
    auto rho = 3.0;
    auto R = 1.380649e-23 * 6.02214076e23; ///< Exact value, given by k_B*N_A
    auto rhotot = rho;
    const std::valarray<double> rhovec = { rhotot / 2, rhotot / 2 };
    int i = 0;
    double ai = 27.0/64.0*pow(R*Tc_K[i], 2)/pc_Pa[i];
    double bi = 1.0/8.0*R*Tc_K[i]/pc_Pa[i];
    vdWEOSSimple vdW(ai, bi);
    autodiff::dual2nd varT = T;
    auto f = [&vdW, &rhovec](auto& T)
    {
        return autodiff::eval(vdW.alphar(T, rhovec));
    };
    auto dalphardT = derivative(f, wrt(varT, varT), at(varT));
    // autodiff::dual varT = T;
    // auto u = vdW.alphar(varT, rhovec);
    // auto dalphardT = derivative([&vdW, &rhovec](auto& T) {return vdW.alphar(T, rhovec); }, wrt(varT), at(varT));
}

int main() {
    test_vdW_autodiff();
    return EXIT_SUCCESS;
}
```
Your example (thanks for it!) does indeed compile. The only issue is that it still gives the first derivative value, even though we specify `varT` twice.
With the `develop` branch? Could you please post here the expected values? I'll check this further.
I'll get the second derivative value calculated with the multicomplex for testing
Try something like this:

```cpp
auto [u0, ux, uxx] = derivatives(f, wrt(x, x), at(x));
```
See this example: https://github.com/allanleal/autodiff/blob/develop/examples/forward/example-forward-higher-order-cross-derivatives.cpp
This is a new capability not in master. It's been a while since I used these derivative functions. I tend to use `jacobian` only.
That's a multivariate function in that example -- does the single-variable case differ?
This code:

```cpp
auto f = [&vdW, &rhovec](auto& T) {
    return autodiff::eval(vdW.alphar(T, rhovec));
};
auto [alphar2, dalphardTT, d2alphardT2] = derivative(f, wrt(varT, varT), at(varT));
```

gives a compilation error:

```
1>C:\Users\ihb\Code\autodiff\test\test_autodiff.cpp(93,57): error C3617: initializers of structured bindings must be of array or non-union class type; type 'double' is not permitted
```
`derivative` should be `derivatives`. This also works with single-variable functions.
Doh. Indeed `derivatives` is not the same as `derivative`, and that seems to work. I'll check the numerical value, but so far it compiles and runs, which is progress.
Numerical value checks out. Speed is also very fast: a few hundred times faster than my current approach, I think. Not sure how this will scale to more challenging models.
I have written a C++ library that allows for evaluation of thermodynamic equations of state written in terms of the residual Helmholtz energy (the model provides only the function, not its derivatives): https://github.com/ianhbell/teqp . Derivatives are taken with complex differentiation for single derivatives, and multicomplex differentiation for higher (and cross) derivatives. This works great with multicomplex differentiation, but it is not very fast, so I wanted to see if automatic differentiation would be faster. The core function needs to accept arbitrary numerical types as template arguments.

The below is a simple test case showing how I would like to use `autodiff`, in analogy to what I did with the complex differentiation tools. The generic model implements the function `alphar`, and here I am trying to take the derivative with respect to the first argument, which is scalar, as a first test. Visual Studio gives voluminous compilation errors (below). I suspect it has something to do with the fact that the function I am wrapping in the lambda is an instance method, though I admit I haven't the foggiest idea how to work around this issue, because the lambda function approach is usually a success in other cases.