SciML / DiffEqCallbacks.jl

A library of useful callbacks for hybrid scientific machine learning (SciML) with augmented differential equation solvers
https://docs.sciml.ai/DiffEqCallbacks/stable/

Improve caching and dispatch of `LinearizingSavingCallback` #195

Open staticfloat opened 5 months ago

staticfloat commented 5 months ago

This adds a new type, `LinearizingSavingCallbackCache`, and some sub-types to allow for efficient re-use of memory as the callback executes over the course of a solve, as well as re-use of that memory across future solves when operating on a large ensemble simulation.

The top-level `LinearizingSavingCallbackCache` creates thread-safe cache pool objects, from which thread-unsafe cache pool objects are acquired for use within a single solve. Those thread-unsafe cache pools can then be released and acquired anew by the next solve. The thread-unsafe pools allow for acquisition of pieces of memory such as temporary `u` vectors (the recursive nature of the `LinearizingSavingCallback` means that we must allocate an unknown number of temporary `u` vectors) and chunks of `u` blocks that are compacted into a single large matrix in the `finalize` method of the callback. All of these pieces of memory are stored within that set of thread-unsafe caches, which are then released back to the top-level thread-safe cache pool for the next solve to acquire and reuse.
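To make the two-level scheme concrete, here is a minimal sketch of the pooling pattern described above. All names (`ThreadSafePool`, `ThreadUnsafePool`, `acquire!`, `release!`) are hypothetical stand-ins, not the PR's actual types:

```julia
# Hypothetical two-level cache pool sketch; not the PR's actual implementation.

# Per-solve pool: used by a single solve at a time, so no locking needed.
struct ThreadUnsafePool{T}
    free::Vector{T}
    alloc::Function            # creates a new object on a pool miss
end
acquire!(p::ThreadUnsafePool) = isempty(p.free) ? p.alloc() : pop!(p.free)
release!(p::ThreadUnsafePool, x) = push!(p.free, x)

# Top-level pool: shared by every thread, so each access takes a lock.
struct ThreadSafePool{P}
    lk::ReentrantLock
    free::Vector{P}
    alloc::Function            # creates a fresh per-solve pool on a miss
end
acquire!(p::ThreadSafePool) = lock(p.lk) do
    isempty(p.free) ? p.alloc() : pop!(p.free)
end
release!(p::ThreadSafePool, x) = lock(() -> push!(p.free, x), p.lk)
```

Under this scheme a solve locks exactly once to check out its per-solve pool, draws all temporaries from it lock-free, and returns it when finished.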

Using these techniques, the solve time of a large ensemble simulation with low per-simulation computation has been reduced dramatically. The simulation solves a third-order Butterworth filter circuit over a certain timespan, swept across different stimulus frequencies and circuit parameters. The parameter sweep results in a 13500-element ensemble simulation which, when run with 8 threads on an M1 Pro, takes:

48.364827 seconds (625.86 M allocations: 19.472 GiB, 41.81% gc time, 0.17% compilation time)

Now, after these caching optimizations, we solve the same ensemble in:

13.208123 seconds (166.76 M allocations: 7.621 GiB, 22.21% gc time, 0.61% compilation time)

As a side note, the raw linearized solution data itself takes 1.04 GB. In general, we expect to allocate somewhere between 2-3x the final output data to account for temporaries and inefficient sharing, so while there is still some more work to be done, this gets us significantly closer to minimal overhead.


staticfloat commented 5 months ago

@ChrisRackauckas I don't know how common it is for callbacks or other pieces of code in the SciML universe to have memory requirements that cannot be known beforehand (as is the case here, where the linearizer is recursive and you need a few `u` vectors at each level of the recursion). This `CachePool` technique (especially having a single thread-safe cache at the top level, from which each ensemble solve can get a thread-unsafe cache pool for its own use, re-used from solution to solution) seems to work quite well. There's still a little more blood to be squeezed from this stone (the allocation profiler is still showing some things in my code, and in IDA, but much, much less), but this takes a large step forward. If this `CachePool` stuff would be useful elsewhere, feel free to move it somewhere more general.
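As an illustration of the unknown-depth point, a recursion that borrows temporaries from a pool might look like this sketch (a shape sketch with made-up names, not the PR's code):

```julia
# Each recursion level borrows a temporary `u` vector from the pool and
# returns it on the way out; the pool grows to the deepest recursion seen,
# then serves every later call without allocating.
function refine!(pool::Vector{Vector{Float64}}, depth::Int)
    utmp = isempty(pool) ? zeros(3) : pop!(pool)  # acquire (allocate only on miss)
    try
        # ... fill `utmp`, decide whether this interval needs subdividing ...
        depth > 0 && refine!(pool, depth - 1)     # depth is not known up front
    finally
        push!(pool, utmp)                         # release for reuse
    end
end
```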

One implementation note: I've avoided adding a dependency on Sundials, but I have Sundials-specific workarounds for the fact that you need to allocate `NVector`s instead of `Vector{Float64}`s for your `u` vectors if your solver is IDA. I think a cleaner implementation would use a package extension on Sundials, but I haven't had the chance to put that in yet. If you think I should do that, feel free to hold off on merging this until I get around to it.
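For reference, a Julia (≥ 1.9) package extension would declare Sundials under `[weakdeps]` and `[extensions]` in Project.toml, with an extension module along these lines. The `allocate_u` hook and the module contents here are a hypothetical sketch of the shape, not the PR's actual code:

```julia
# ext/DiffEqCallbacksSundialsExt.jl -- hypothetical sketch
module DiffEqCallbacksSundialsExt

using DiffEqCallbacks, Sundials

# The main package would define a generic fallback such as
#   allocate_u(alg, n) = Vector{Float64}(undef, n)
# and the extension specializes it so that pooled `u` temporaries
# are `NVector`s whenever the solver is IDA:
DiffEqCallbacks.allocate_u(::Sundials.IDA, n::Integer) = Sundials.NVector(zeros(n))

end # module
```

This keeps Sundials out of the package's hard dependencies while still dispatching to the right buffer type when it is loaded.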

staticfloat commented 5 months ago

> One quick question: since we've noticed before that the `CallbackSet`s themselves are not threadsafe, what's the need for thread safety here?

The need is that we are constructing one `LinearizingSavingCallbackCache` and sharing it among all threads in an ensemble simulation. See example here. This reduces the total number of allocations from being linear in the number of ensemble elements to being linear in the number of threads.

This means that each thread will be hitting the `ils_cache` and `lsc_cache` properties in parallel, so we need to lock access around the pool, otherwise you'll get corruption. The benefit here is that each thread only needs to lock once, to get its own thread-unsafe cache pool which it uses in isolation, then releases back to the globally-shared thread-safe cache pool; the next time a thread starts up a new solve, it can acquire a cache pool that already has a bunch of memory allocated for it.
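In terms of the hypothetical pool sketch from earlier, the lock-once-per-solve pattern looks like this (again a sketch, reusing those made-up `acquire!`/`release!` names, not the PR's code):

```julia
# One globally-shared thread-safe pool; each trajectory locks exactly once
# to check out a warm per-solve pool, then runs lock-free.
function run_ensemble!(shared, n_trajectories)
    Threads.@threads for i in 1:n_trajectories
        solve_pool = acquire!(shared)      # the only locked operation per solve
        try
            # ... solve trajectory `i`, drawing temporary `u` vectors and
            # `u` blocks from `solve_pool` without any locking ...
        finally
            release!(shared, solve_pool)   # hand the warm memory to the next solve
        end
    end
end
```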

staticfloat commented 5 months ago

I've decided to switch this to draft because I really don't like workarounds, so I'm just going to write the Sundials extension and be done with it.

staticfloat commented 5 months ago

Aaaand un-drafted; this is now using a package extension and makes me happy again.

staticfloat commented 5 months ago

I don't understand why Aqua thinks I have an unbound type parameter in the constructor for the `IndependentlyLinearizedSolution` type.

staticfloat commented 5 months ago

I believe the Aqua behavior is a bug: https://github.com/JuliaTesting/Aqua.jl/issues/265
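For context on what the check hunts for: `Aqua.test_unbound_args` flags methods with a static parameter that dispatch may fail to bind; the linked issue tracks what appears to be this check misfiring on a constructor whose parameter is in fact bound. A classic true positive looks like:

```julia
using Aqua

# `T` cannot be bound when `f()` is called with zero arguments, so Aqua
# reports this method as having an unbound type parameter:
f(xs::T...) where {T} = length(xs)

# The package-level check (commented out here so the snippet stands alone):
# Aqua.test_unbound_args(DiffEqCallbacks)
```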

ChrisRackauckas commented 5 months ago

bump