Closed: gdalle closed this 6 months ago
@Vaibhavdixit02 let's maybe wait until @brianguenter weighs in on the content of the new struct before registering v0.2.8?
@Vaibhavdixit02 @gdalle I'm happy to take a look. Which struct are you referring to?
Currently FastDifferentiation has three distinct phases that are visible to the end user: derivative graph analysis (to the user this looks like symbolic derivative calculation), code generation, and code execution.
I plan to make a new version that will collapse these into one. The system will do something similar to trace compilation.
In this system there might be parameters associated with the number of conditionals the compiler will trace through, or the number of traces that will be cached. How would that fit into your framework? Would those parameters go in this struct?
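If such trace-compilation parameters materialize, they could indeed live as fields of the backend struct. A hedged sketch (the field names here are purely illustrative, not an actual FastDifferentiation API):

```julia
# Hypothetical sketch only: illustrative parameter names, not a real API.
Base.@kwdef struct AutoFastDifferentiation
    max_traced_conditionals::Int = 8  # how many conditionals the compiler traces through
    trace_cache_size::Int = 16        # how many compiled traces are cached
end

backend = AutoFastDifferentiation(max_traced_conditionals = 4)
```

With `Base.@kwdef`, adding such fields later with defaults would keep the zero-argument constructor `AutoFastDifferentiation()` working for existing users.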
Yes, but since those parameters don't exist yet, we can leave the struct empty for now
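For reference, an empty backend struct in the style of the other backends might look like this (a sketch assuming the `ADTypes.AbstractADType` supertype):

```julia
using ADTypes: AbstractADType

# Empty for now; configuration fields can be added in a later release
# without changing how users construct the backend.
struct AutoFastDifferentiation <: AbstractADType end

backend = AutoFastDifferentiation()
```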
Should code generation parameters go in this struct? These are the code generation parameters currently available:
How about sparsity? Should this be specified here as well?
Do you mean a boolean flag, or are there multiple choices for how the sparsity detection is done?
Also note that AutoSparseFastDifferentiation exists
For sparsity, I think the easiest approach is to split the backend in two, sparse and non-sparse, as was done for other backends.
For in-place vs out-of-place, in the DifferentiationInterface framework the same backend object can be used in both ways, so I would leave it out.
For incrementation vs zeroing, I think the default will be zeroing so again I would leave it out.
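Under that split, selecting dense versus sparse differentiation would be a matter of backend type rather than a keyword flag. A sketch, assuming the backend types are available from ADTypes.jl and using DifferentiationInterface's `jacobian(f, backend, x)` form:

```julia
using DifferentiationInterface: jacobian
using ADTypes: AutoFastDifferentiation, AutoSparseFastDifferentiation

f(x) = x .^ 2
x = rand(10)

# Same function and point; the backend type picks the Jacobian representation.
J_dense  = jacobian(f, AutoFastDifferentiation(), x)
J_sparse = jacobian(f, AutoSparseFastDifferentiation(), x)
```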
Checklist
Additional context
FastDifferentiation.jl is an efficient symbolic backend that DifferentiationInterface.jl can use
cc @brianguenter, do you think there should be other parameters here to specify how the backend is configured? Similar to, e.g., the chunk size in ForwardDiff or the compiled-mode toggle in ReverseDiff
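For comparison, this is how those two existing backends expose configuration through their ADTypes.jl constructors (keyword names as in recent ADTypes versions):

```julia
using ADTypes

AutoForwardDiff(chunksize = 4)   # fixed chunk size for forward-mode dual-number batches
AutoReverseDiff(compile = true)  # record the tape once and compile it for reuse
```

Any FastDifferentiation-specific parameters would follow the same pattern: keyword arguments with sensible defaults on the backend struct.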