SciML / AutoOptimize.jl

Automatic optimization and parallelization for Scientific Machine Learning (SciML)
https://benchmarks.sciml.ai/
MIT License

Utilize AutoPreallocation.jl #7

Open ChrisRackauckas opened 4 years ago

ChrisRackauckas commented 4 years ago

https://github.com/oxinabox/AutoPreallocation.jl

ChrisRackauckas commented 4 years ago

@oxinabox is AutoPreallocation safe for ForwardDiff, i.e. does it use DualCaches?

oxinabox commented 4 years ago

AutoPreallocation hijacks the construction of new Arrays, reusing ones that it has already allocated earlier. It kind of negates the need for DualCache, which exists to do the same thing as a special case, right? But it should work fine on top of that, since it will see the array being created in the DualCache and then capture that one only.

The main problem with AutoPreallocation.jl is Cassette-related inference failures, which can sometimes dominate the cost saved by avoiding the allocations.
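A minimal sketch of the record-and-replay workflow described above, assuming the `record_allocations`/`avoid_allocations` API from AutoPreallocation.jl's README (check the package itself for the current interface):

```julia
using AutoPreallocation

# A function that allocates a fresh output array on every call
f(x) = x * x

x = rand(100, 100)

# First call: run normally while recording every Array allocation f makes
val, record = record_allocations(f, x)

# Subsequent calls: replay the recording, reusing the recorded buffers
# instead of constructing new Arrays
val2 = avoid_allocations(record, f, x)
```

Because the replay is driven by a Cassette pass over every array construction, it also captures the arrays a DualCache would create, which is why the two mechanisms compose rather than conflict.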

ChrisRackauckas commented 4 years ago

It kind of negates the need for DualCache, which exists to do the same thing as a special case, right?

If the user runs AutoPreallocation and then runs it with dual numbers, it'll automatically use different caches than with Float64, and that's done via type inference?

oxinabox commented 4 years ago

If the user runs AutoPreallocation and then runs it with dual numbers, it'll automatically use different caches than with Float64, and that's done via type inference?

I'm not sure I understand. Nothing in AutoPreallocation depends on type inference, except that because AutoPreallocation uses Cassette, type inference sometimes breaks, causing things to act type-unstable. That's a standard problem with Cassette.

But in general, if your code allocates differently between two calls, AutoPreallocation will fail. If it allocates the same way, it will work.
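A hedged sketch of that caveat, again assuming the `record_allocations`/`avoid_allocations` names from AutoPreallocation.jl's README: a recording is only valid for calls with the same allocation pattern (sizes and element types) as the call it was recorded from, so calls with a different element type (e.g. Float32, or ForwardDiff Duals) need their own recording:

```julia
using AutoPreallocation

# Allocates one output array per call; the pattern depends on eltype(x)
f(x) = x .+ 1

x64 = rand(3)
x32 = rand(Float32, 3)

# Record once per distinct call signature
_, rec64 = record_allocations(f, x64)
avoid_allocations(rec64, f, x64)   # ok: same sizes and types as the recording

# Replaying the Float64 recording for a Float32 call would hand back
# Vector{Float64} buffers where Vector{Float32} ones are expected,
# so make a separate recording for that signature instead:
_, rec32 = record_allocations(f, x32)
avoid_allocations(rec32, f, x32)
```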


Probably just read these two short files and you will see how AutoPreallocation.jl works.