Closed. ricfrod closed this issue 1 year ago.
The problem is really just that RuntimeGeneratedFunctions are not compatible with CUDA.jl compilation. @vchuravy is this possible?
Not in its current state. The `expr` field is prohibitive. But maybe you could construct an RGF that has the right cache tag etc. and doesn't store the expr.
Is there any way to recognize that it's in this context and instead `eval` the function and invoke the eval'd version for the compilation?
You would have to do that before passing it off to the GPU stack.
I'm wondering if the Cassette stack could detect a RuntimeGeneratedFunction, replace it with an `eval`, and then grab the evaluated value, naming it correctly in some namespace so you know there's no collision?
Try EnsembleGPUKernel. It probably works with MTK-generated functions.
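For illustration, the suggestion above might look like the following. This is a hedged sketch, not a confirmed fix: the `GPUTsit5` solver and the exact `EnsembleGPUKernel` constructor arguments vary across DiffEqGPU.jl versions, so treat the specific calls as assumptions.

```julia
# Hedged sketch (API names may differ by DiffEqGPU.jl version):
# solve an MTK-regenerated problem with the kernel-based GPU ensemble.
using OrdinaryDiffEq, ModelingToolkit, DiffEqGPU, CUDA

# Lorenz system as in the DiffEqGPU.jl tutorial
function lorenz!(du, u, p, t)
    du[1] = p[1] * (u[2] - u[1])
    du[2] = u[1] * (p[2] - u[3]) - u[2]
    du[3] = u[1] * u[2] - p[3] * u[3]
end

u0 = Float32[1.0, 0.0, 0.0]
tspan = (0.0f0, 100.0f0)
p = Float32[10.0, 28.0, 8 / 3]
prob = ODEProblem(lorenz!, u0, tspan, p)

# Regenerate the system symbolically so MTK builds the extra functions
sys = modelingtoolkitize(prob)
mtkprob = ODEProblem(sys, [], tspan)

monteprob = EnsembleProblem(mtkprob, safetycopy = false)
# Kernel-based ensemble: each trajectory is its own GPU kernel
sol = solve(monteprob, GPUTsit5(), EnsembleGPUKernel(CUDA.CUDABackend());
            trajectories = 10_000)
```

The key difference from the array-based ensemble is that `EnsembleGPUKernel` compiles one kernel per trajectory rather than vectorizing a single right-hand side over a batch, which is why it can sidestep the RGF compilation path.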
This is solved on the most recent ModelingToolkit and Symbolics ecosystems.
Hello all! This is related to this post, where I was attempting to recreate the Lorenz equations example in DiffEqGPU.jl, but instead of providing numerical functions I wanted to use ModelingToolkit.jl to generate the Jacobian and time gradient.
Here's how to reproduce the error:
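A minimal sketch of the setup that triggers the incompatibility, reconstructed from the description above (assuming the standard DiffEqGPU.jl Lorenz tutorial plus `modelingtoolkitize`; the exact keyword and constructor names are assumptions and may differ across versions):

```julia
# Hypothetical reproduction sketch; package APIs may differ by version.
using OrdinaryDiffEq, ModelingToolkit, DiffEqGPU, CUDA

function lorenz!(du, u, p, t)
    du[1] = p[1] * (u[2] - u[1])
    du[2] = u[1] * (p[2] - u[3]) - u[2]
    du[3] = u[1] * u[2] - p[3] * u[3]
end

u0 = Float32[1.0, 0.0, 0.0]
tspan = (0.0f0, 100.0f0)
p = Float32[10.0, 28.0, 8 / 3]
prob = ODEProblem(lorenz!, u0, tspan, p)

# Ask MTK for the symbolic Jacobian and time gradient; these are
# returned as RuntimeGeneratedFunctions, which is what CUDA.jl
# compilation then chokes on.
sys = modelingtoolkitize(prob)
mtkprob = ODEProblem(sys, [], tspan, jac = true, tgrad = true)

monteprob = EnsembleProblem(mtkprob, safetycopy = false)
# Errors during GPU compilation of the MTK-generated functions
sol = solve(monteprob, Tsit5(), EnsembleGPUArray(); trajectories = 10_000)
```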
As @ChrisRackauckas pointed out: