SciML / DiffEqGPU.jl

GPU-acceleration routines for DifferentialEquations.jl and the broader SciML scientific machine learning ecosystem
https://docs.sciml.ai/DiffEqGPU/stable/
MIT License

Support terminate! #43

Closed · ChrisRackauckas closed this issue 1 year ago

ChrisRackauckas commented 4 years ago

We will need to think about this one. Actually terminating one trajectory would cause the others not to finish. I think terminate! should just record the termination time in an array, and then we truncate each solution back to its terminated time after solving.
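The scheme above could look roughly like the following. This is a minimal sketch of the idea only, not the DiffEqGPU implementation; `term_times`, `traj_id`, and `truncate_to_termination` are hypothetical names.

```julia
# One slot per trajectory; NaN means "never terminated".
term_times = fill(NaN, ntrajectories)

# Inside the callback's affect!, instead of halting the kernel,
# we would only record the event time and keep stepping:
#     term_times[traj_id] = integrator.t

# After the whole ensemble finishes, truncate each saved trajectory
# back to its recorded termination time:
function truncate_to_termination(ts, us, t_term)
    isnan(t_term) && return ts, us   # trajectory never terminated
    keep = ts .<= t_term
    return ts[keep], us[keep]
end
```

This keeps all GPU threads running in lockstep until the common `tspan` end, so no trajectory blocks the others.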

aml5600 commented 4 years ago

Could this support synchronizing any arbitrary callback?

ChrisRackauckas commented 4 years ago

It could, but it would need to make sure all trajectories want to terminate.

aml5600 commented 4 years ago

Sorry, I meant synchronizing any "affect" of a callback, not just terminating.

ChrisRackauckas commented 4 years ago

what do you mean?

aml5600 commented 4 years ago

In the case of a possibly hybrid system being run on a GPU, it makes sense to synchronize the instances at the switching point so that only one subsystem is executed at a time. Although I guess this would also require more than one generated kernel... obviously this can be circumvented with `du = (use_a)*sys_a(u) + (!use_a)*sys_b(u)`.

Maybe this is too targeted an application, though.
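The branch-free workaround mentioned above can be sketched as follows. `sys_a` and `sys_b` are placeholder right-hand sides invented for illustration; the point is that both are evaluated every step and a Bool mask (which Julia treats as 0/1) selects the contribution, avoiding divergent control flow in the kernel.

```julia
sys_a(u) = -2.0 .* u   # placeholder subsystem A
sys_b(u) =  0.5 .* u   # placeholder subsystem B

# Both branches run; the mask picks which one contributes to du.
blended_rhs(u, use_a::Bool) = use_a .* sys_a(u) .+ (!use_a) .* sys_b(u)
```

On SIMT hardware this usually beats true branching, since threads in a warp would otherwise serialize over the two branches anyway.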

ChrisRackauckas commented 4 years ago

Running both is probably much faster on the GPU, and this is unrelated to this issue.

ChrisRackauckas commented 2 years ago

Note that a special callback implementation for this is being done in https://github.com/SciML/DiffEqGPU.jl/pull/174.

ChrisRackauckas commented 1 year ago

This is now supported in EnsembleGPUKernel and documented.
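For reference, usage follows the documented EnsembleGPUKernel pattern, roughly as below. This is a hedged sketch: the ODE, threshold, and trajectory count are made up, and exact keyword arguments may differ across DiffEqGPU versions, so check the docs linked above.

```julia
using DiffEqGPU, OrdinaryDiffEq, StaticArrays, CUDA

# Simple decay problem (illustrative only), in the out-of-place
# static-array form EnsembleGPUKernel expects.
f(u, p, t) = SA[-p[1] * u[1]]
prob = ODEProblem{false}(f, SA[10.0f0], (0.0f0, 10.0f0), SA[1.0f0])

# Terminate a trajectory once its state drops below a threshold.
condition(u, t, integrator) = u[1] < 1.0f0
affect!(integrator) = terminate!(integrator)
cb = DiscreteCallback(condition, affect!; save_positions = (false, false))

monteprob = EnsembleProblem(prob, safetycopy = false)
sol = solve(monteprob, GPUTsit5(), EnsembleGPUKernel(CUDA.CUDABackend());
            trajectories = 1000, callback = cb)
```

Consistent with the discussion above, each trajectory's saved output is truncated back to its own termination time rather than one trajectory halting the whole kernel.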