Closed: martin-abrudsky closed this issue 1 year ago
Can you try the following:
sol = solve(Ensemble_Problem,
    Tsit5(),
    EnsembleGPUArray(),
    dt = 0.01,
    trajectories = trajectories,
    # batch_size = 10_000,
    callback = gpu_cb,
    adaptive = false,
    save_everystep = false,
    merge_callbacks = true
)
Hello, I tried your recommendation, but it gives me this error:
InvalidIRError: compiling kernel #gpu_continuous_condition_kernel(KernelAbstractions.CompilerMetadata{KernelAbstractions.NDIteration.DynamicSize, KernelAbstractions.NDIteration.DynamicCheck, Nothing, CartesianIndices{1, Tuple{Base.OneTo{Int64}}}, KernelAbstractions.NDIteration.NDRange{1, KernelAbstractions.NDIteration.DynamicSize, KernelAbstractions.NDIteration.DynamicSize, CartesianIndices{1, Tuple{Base.OneTo{Int64}}}, CartesianIndices{1, Tuple{Base.OneTo{Int64}}}}}, typeof(condition), CuDeviceVector{Float64, 1}, CuDeviceMatrix{Float64, 1}, Float64, CuDeviceMatrix{Float64, 1}) resulted in invalid LLVM IR
Reason: unsupported dynamic function invocation (call to convert)
Stacktrace:
[1] setindex!
@ ~/.julia/packages/CUDA/DfvRa/src/device/array.jl:194
[2] macro expansion
@ ~/.julia/packages/DiffEqGPU/CiiCq/src/DiffEqGPU.jl:63
[3] gpu_continuous_condition_kernel
@ ~/.julia/packages/KernelAbstractions/C8flJ/src/macros.jl:81
[4] gpu_continuous_condition_kernel
@ ./none:0
Reason: unsupported dynamic function invocation (call to getindex)
Stacktrace:
[1] condition
@ ~/FAMAF/Beca_CIN_Trabajo_Final/skymap/GPU_Julia/pot_central_GPU_Float64.ipynb:21
[2] macro expansion
@ ~/.julia/packages/DiffEqGPU/CiiCq/src/DiffEqGPU.jl:63
[3] gpu_continuous_condition_kernel
@ ~/.julia/packages/KernelAbstractions/C8flJ/src/macros.jl:81
[4] gpu_continuous_condition_kernel
@ ./none:0
Reason: unsupported dynamic function invocation (call to -)
Stacktrace:
[1] condition
@ ~/FAMAF/Beca_CIN_Trabajo_Final/skymap/GPU_Julia/pot_central_GPU_Float64.ipynb:21
...
@ ~/.julia/packages/CUDA/DfvRa/src/utilities.jl:25 [inlined]
[33] top-level scope
@ ~/.julia/packages/CUDA/DfvRa/src/pool.jl:490 [inlined]
[34] top-level scope
@ ~/FAMAF/Beca_CIN_Trabajo_Final/skymap/GPU_Julia/pot_central_GPU_Float64.ipynb:0
I think the issue arises because you're using global variables inside `condition`, which makes it not GPU-compatible. Callback support is limited, so use constants directly within `condition`.
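For instance, here is a minimal sketch of a `condition` with the constant inlined rather than read from a global; the radius name and value are illustrative, not taken from the original notebook:

```julia
# Hypothetical GPU-compatible condition: every value it uses is either an
# argument or a local literal constant, so the compiled kernel needs no
# dynamic lookup of an untyped global.
function condition(u, t, integrator)
    r_cut = 25.0f0                      # cutoff radius, defined locally
    return u[1]^2 + u[2]^2 - r_cut^2    # crosses zero at the cutoff
end
```

Capturing a non-`const` global instead forces a dynamic dispatch (the `unsupported dynamic function invocation` errors above), since its type is unknown at kernel compile time.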
However, I saw that your `affect!` is a `terminate!`, which is not supported yet (#43). We're working on it.
cc @ChrisRackauckas
OK. Is `terminate!` also not supported by `EnsembleGPUKernel()`? I ask because I tried to solve this problem with that ensemble method: I first tried the "Callbacks with EnsembleGPUKernel" example found in the README, but I got the following error:
GPU compilation of kernel #tsit5_kernel(CUDA.CuDeviceVector{ODEProblem{SVector{1, Float32}, Tuple{Float32, Float32}, false, SciMLBase.NullParameters, ODEFunction{false, SciMLBase.AutoSpecialize, typeof(f), LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}, SciMLBase.StandardODEProblem}, 1}, CUDA.CuDeviceMatrix{SVector{1, Float32}, 1}, CUDA.CuDeviceMatrix{Float32, 1}, Float32, CallbackSet{Tuple{}, Tuple{DiscreteCallback{typeof(condition), typeof(affect!), typeof(SciMLBase.INITIALIZE_DEFAULT), typeof(SciMLBase.FINALIZE_DEFAULT)}}}, CUDA.CuDeviceVector{Float32, 1}, Int64, Nothing, Val{true}) failed
KernelError: passing and using non-bitstype argument
Argument 6 to your kernel function is of type CallbackSet{Tuple{}, Tuple{DiscreteCallback{typeof(condition), typeof(affect!), typeof(SciMLBase.INITIALIZE_DEFAULT), typeof(SciMLBase.FINALIZE_DEFAULT)}}}, which is not isbits:
.discrete_callbacks is of type Tuple{DiscreteCallback{typeof(condition), typeof(affect!), typeof(SciMLBase.INITIALIZE_DEFAULT), typeof(SciMLBase.FINALIZE_DEFAULT)}} which is not isbits.
.1 is of type DiscreteCallback{typeof(condition), typeof(affect!), typeof(SciMLBase.INITIALIZE_DEFAULT), typeof(SciMLBase.FINALIZE_DEFAULT)} which is not isbits.
.save_positions is of type BitVector which is not isbits.
.chunks is of type Vector{UInt64} which is not isbits.
I get the same error in my own code.
With EnsembleGPUArray, that's a won't fix. We should just throw an error for this, and tell people it's required that they use the kernel generating methods.
Yes, I'll make a PR on it. A lot of stuff needs to be on docs as well.
With the new docs released and `terminate!` support in `EnsembleGPUKernel`, can you try this again?
It's now documented and all, so closing. EnsembleGPUKernel is the answer, as per the docs.
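For reference, a sketch along the lines of the documented approach; the exact solver and keyword set may differ between DiffEqGPU versions, and `GPUTsit5`, the toy problem, and the callback threshold here are assumptions based on the docs, not the original code:

```julia
using DiffEqGPU, OrdinaryDiffEq, StaticArrays

# Toy scalar decay problem in Float32; GPU kernels require isbits state,
# hence the SVector initial condition and the out-of-place (false) form.
f(u, p, t) = SVector{1}(-u[1])
u0 = @SVector [10.0f0]
prob = ODEProblem{false}(f, u0, (0.0f0, 10.0f0))
monteprob = EnsembleProblem(prob, safetycopy = false)

# Stop each trajectory once u[1] crosses 4; terminate! is now supported
# inside EnsembleGPUKernel callbacks.
condition(u, t, integrator) = u[1] - 4.0f0
affect!(integrator) = terminate!(integrator)
cb = ContinuousCallback(condition, affect!)

sol = solve(monteprob, GPUTsit5(), EnsembleGPUKernel(),
            trajectories = 10_000,
            adaptive = false, dt = 0.01f0,
            callback = cb, merge_callbacks = true)
```

Note that `condition` and `affect!` use only their arguments and literals, so the `CallbackSet` passed to the kernel stays isbits-compatible.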
Hello, I am solving a central potential problem with `EnsembleGPUArray()` for multiple initial conditions. When I add the `ContinuousCallback`, the first 800 trajectories ignore `affect!(integrator)` and the last 200 apply the cut correctly. If I solve for 10_000 trajectories, the first 8_000 trajectories ignore the cut and the last 2_000 apply it correctly, and similarly for 1_000_000 trajectories. The code is the following.
Thanks in advance.