Higher Order Reverse Derivatives Efficiently - Automatic Differentiation library based on the paper "Provably correct, asymptotically efficient, higher-order reverse-mode automatic differentiation"
BSD 3-Clause "New" or "Revised" License
Define a Tensor instance that does the fast forward derivative computation #94
That would mimic what ADModeDerivative did previously, but in a separate code path, taking advantage of the flexibility of the Tensor class.
Until this is done, we have the slow derivative, computed from delta expressions in the main code path. IIRC the speed difference wasn't that big; probably the same tensor operations are performed, so the main cost is just the delta expressions taking up RAM and cache.
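To illustrate why a dedicated forward-mode path avoids the memory cost described above, here is a minimal dual-number sketch in Python (a generic illustration of forward-mode AD, not the library's Tensor class or its actual API): the tangent is computed alongside the primal value, so no delta expressions need to be retained in RAM.

```python
import math

class Dual:
    """Dual number: a primal value paired with its tangent (derivative).
    Forward mode computes the derivative in lockstep with the value,
    so no tape or delta expressions accumulate in memory."""
    def __init__(self, primal, tangent=0.0):
        self.primal = primal
        self.tangent = tangent

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.primal + other.primal, self.tangent + other.tangent)
    __radd__ = __add__

    def __mul__(self, other):
        # Product rule: (uv)' = u'v + uv'
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.primal * other.primal,
                    self.tangent * other.primal + self.primal * other.tangent)
    __rmul__ = __mul__

def sin(x):
    # Chain rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.primal), math.cos(x.primal) * x.tangent)

# Derivative of f(x) = x * sin(x) at x = 2.0, seeded with tangent 1.0
x = Dual(2.0, 1.0)
y = x * sin(x)
print(y.primal, y.tangent)  # value and derivative: sin(2) + 2*cos(2)
```

The same tensor operations run in both modes, as noted above; the difference is that this style carries only one extra number (or tensor) per value, rather than a growing delta-expression structure.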