Closed April-Hannah-Lena closed 1 year ago
I think it would actually make sense to simplify `sample_adaptive` and not have it do a bunch of error checks. Instead, the more complicated version could be added to `examples/dadras_system.jl`. This could also serve as a tutorial on how to use the `SampledBoxMap` constructor. If you prefer this, let me know and I'll push the changes from my local computer.
Yes, I would also prefer the simpler version!
When I run
```julia
using GAIO

# the Lorenz system
const σ, ρ, β = 10.0, 28.0, 0.4
v((x,y,z)) = (σ*(y-x), ρ*x-y-x*z, x*y-β*z)
f(x) = rk4_flow_map(v, x)

center, radius = (0,0,25), (30,30,30)
P = BoxPartition(Box(center, radius), (100,100,100))
F = BoxMap(f, P)
Fa = AdaptiveBoxMap(f, P)

x = (sqrt(β*(ρ-1)), sqrt(β*(ρ-1)), ρ-1) # equilibrium
@time W = unstable_set!(F, P[x])
@time Wa = unstable_set!(Fa, P[x])
```
(running the last two lines twice each, so compilation time is excluded) I get
```
0.189365 seconds (8.79 k allocations: 5.513 MiB)
12786-element BoxSet in 100 x 100 x 100 BoxPartition
0.327658 seconds (6.12 M allocations: 432.249 MiB, 46.64% gc time)
34429-element BoxSet in 100 x 100 x 100 BoxPartition
```
Is there a reason why the adaptive version does so many allocations?
> Is there a reason why the adaptive version does so many allocations?
I think it's because in each iteration the matrix `L` needs to be reallocated. This could be avoided with a bit of a workaround; I'll get on that.
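The kind of workaround I have in mind is to hoist the allocation out of the loop and overwrite `L` in place. A minimal sketch of the pattern (with a placeholder `entry` function, not the actual GAIO code):

```julia
# Sketch: allocate L once and overwrite its entries in place each
# iteration, instead of constructing a fresh matrix every time.
# `entry` is a placeholder for whatever actually fills the matrix.
entry(i, j, iter) = Float64(i + j + iter)

n = 3
L = Matrix{Float64}(undef, n, n)      # one allocation, reused below
for iter in 1:100
    for j in 1:n, i in 1:n
        L[i, j] = entry(i, j, iter)   # fill in place, no new matrix
    end
    # ... use L here, e.g. take its SVD ...
end
```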
> Is there a reason why the adaptive version does so many allocations?
It turns out the issue lies with Julia's SVD algorithm. The line `_, σ, Vt = svd(L)` causes all the allocations. Unfortunately, Julia does not offer a fully in-place SVD (`svd!` only performs some intermediate steps in place, but does not write the whole decomposition into preallocated output). Anything short of completely rewriting the `gesdd!` function in `LinearAlgebra.LAPACK` wouldn't do much.
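This is easy to reproduce in isolation; both `svd` and the "in-place" `svd!` still allocate the factors on every call (a small sanity check, not code from the package):

```julia
using LinearAlgebra

L = rand(32, 3)
svd(L); svd!(copy(L))             # warm up, so compilation is not counted

a_svd  = @allocated svd(L)        # allocates U, S, V plus LAPACK workspace
buf    = copy(L)
a_svd! = @allocated svd!(buf)     # overwrites buf, but the returned factors
                                  # are still freshly allocated
println((a_svd, a_svd!))
```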
I have, however, tracked down and removed all allocations not related to `svd`, and simplified the entire `sample_adaptive` function. The complicated version is moved to `dadras_system.jl`.
Does it really help to preallocate `n` and `h`? Also, can we extract the computation of the Lipschitz constant into a separate function?
> Does it really help to preallocate `n` and `h`?
The preallocation seemed to make a small difference, though it was negligible, so I removed it.
> Also, can we extract the computation of the Lipschitz constant into a separate function?
It is now in the function `approx_lipschitz`.
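The idea behind such a function can be sketched roughly as follows: evaluate `f` at perturbed test points around a box center, assemble the finite differences into a matrix, and take its largest singular value as the local rate of expansion. The name, signature, and body here are my assumptions for illustration, not the package's actual code:

```julia
using LinearAlgebra

# Rough sketch (assumed, not GAIO's actual implementation): estimate a
# local Lipschitz constant of f over a box via finite differences.
function approx_lipschitz_sketch(f, center, radius; h = 1e-4)
    n  = length(center)
    L  = Matrix{Float64}(undef, n, n)
    fc = f(center)
    for j in 1:n
        x = collect(float.(center))
        x[j] += h * radius[j]
        L[:, j] .= (f(x) .- fc) ./ (h * radius[j])   # j-th difference quotient
    end
    return opnorm(L)   # largest singular value of L
end
```

For a linear map `f(x) = A*x` this returns (approximately) `opnorm(A)`, which is exactly its Lipschitz constant.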
As for a SIMD accelerated version: the overhead required to vectorize the sampling seems to vastly decrease performance. This is particularly true for systems that require fewer than 32 test points most of the time, which is basically all systems I've tested. I'm not sure it's worth toiling over a SIMD version when the whole point is to reduce the number of test points needed in the first place.
Ok, nice. Yes, let's forget about a SIMD accelerated version of the adaptive sampling ... keep it simple ...
Here is the version of `sample_adaptive` which uses test points to approximate the Lipschitz constant. I'll add the SIMD accelerated version in a few hours, but the no-acceleration version is ready. It can be tested in `examples/unstable_manifold.jl`.