Using a generator function slightly decreased the compilation time and memory usage (287 seconds and 2.3 GB).
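By "generator function" I mean something like the following sketch of a Kuramoto right-hand side; N, K, omega, and neighbours are placeholders for the actual model parameters:

import symengine
from jitcode import jitcode_lyap, y

def f():
    # yield one derivative at a time instead of building the whole
    # list of symbolic expressions in memory at once
    for i in range(N):  # N, K, omega, neighbours: placeholders
        coupling = sum(symengine.sin(y(j) - y(i)) for j in neighbours[i])
        yield omega[i] + K / N * coupling

I = jitcode_lyap(f, n=N, n_lyap=n_lyap)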
A few suggestions or remarks:
Do you know whether the memory issues arise during or before compiling (compile_C)?
If before: Have you tried compiling more Lyapunov exponents with generator functions to see whether the memory consumption increases?
If during: Have you tried using Clang as a compiler? It tends to respect chunking better than GCC. (A sketch for switching compilers follows below.)
If during: Did you try compiling without optimisation?
from jitcxde_common import DEFAULT_COMPILE_ARGS
…
I.compile_C( extra_compile_args = DEFAULT_COMPILE_ARGS + ["-O0"] )
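As for trying Clang: assuming your build backend honours the CC environment variable (distutils/setuptools usually do), switching compilers could look like this:

import os
os.environ["CC"] = "clang"  # assumption: must be set before compile_C is called
…
I.compile_C()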
Are you really sure you need 30–50 Lyapunov exponents? I fail to see how they can provide any relevant information, and you will very likely run into underflow problems at runtime on account of some of these exponents being very low.
The memory issue arises during compile_C. Using a generator function, compiling with clang, and using the -O0 optimization flag all reduced the memory usage and I did not get a memory error; -O0 alone (with gcc) did not. The program calculates all LEs and is slow for large networks. The Lyapunov dimensions here are different.
We can compare the largest LEs as a more convenient measure but I am not sure the largest LE is always a good replacement for the Lyapunov dimension index.
Using a generator function, compiling with clang, and using the -O0 optimization flag all reduced the memory usage and I did not get a memory error.
Just being curious: Did each of these actions reduce the memory usage on its own, or did you need to do all of them at once?
We can compare the largest LEs as a more convenient measure but I am not sure the largest LE is always a good replacement for the Lyapunov dimension index.
If you just want an indicator for chaos, the largest Lyapunov exponent is certainly a more robust measure¹. Otherwise, it depends on your application. I am not aware of a good interpretation or other use of the Lyapunov dimension for such high-dimensional systems, but that doesn’t mean that none exists.
¹ More precisely: Be sure to discard transients. Then get a reasonable error margin for the largest Lyapunov exponent, e.g., the standard error of the mean of local Lyapunov exponents at least one oscillation apart. Also, the second-largest Lyapunov exponent should be zero or larger.
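A sketch of that estimate; local_lyaps is assumed to hold the local largest Lyapunov exponents, sampled at least one oscillation apart, with transients already discarded:

import numpy as np

def largest_le_estimate(local_lyaps):
    # local_lyaps: 1-D array of local largest Lyapunov exponents
    mean = np.mean(local_lyaps)
    sem = np.std(local_lyaps, ddof=1) / np.sqrt(len(local_lyaps))
    return mean, sem  # treat the exponent as positive only if mean - sem > 0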
Thank you.
Just being curious: Did each of these actions reduce the memory usage on its own, or did you need to do all of them at once?
clang and -O0 are necessary and the generator function helps. It seems gcc is not helpful for reducing the memory even with -O0. I am using gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 and clang version 6.0.0-1ubuntu2 (tags/RELEASE_600/final).
I think a convergence test helps here after dropping the transient behavior. The LEs are recorded to a buffer over the last m steps. Once the buffer is full, the standard deviation of each LE is calculated. If the standard deviation of every LE is smaller than a threshold, the LEs have converged and no longer change, so the program terminates. Something like this:
import numpy as np

def convergence_test(buffers, threshold, verbosity=False):
    # buffers has shape (BUFFER_LENGTH, n_lyap): one row of LEs per recorded step
    info = 0  # 0: not converged, 1: converged
    lyap = np.mean(buffers, axis=0)   # mean of each LE over the buffer
    st_dev = np.std(buffers, axis=0)  # fluctuation of each LE over the buffer
    if (st_dev < threshold).all():
        info = 1
        if verbosity:
            print("Lyapunov exponents converged. {:.6f}".format(np.max(st_dev)))
    return lyap, info
When info == 1, the program terminates and returns the LEs.
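For completeness, a sketch of the loop around it; I is the jitcode_lyap instance, and BUFFER_LENGTH, threshold, t_transient, t_max, and dt are placeholders for the actual values:

import numpy as np
from collections import deque

buffers = deque(maxlen=BUFFER_LENGTH)  # keeps only the last BUFFER_LENGTH samples
for time in np.arange(t_transient, t_max, dt):  # start after the transient
    lyaps = I.integrate(time)[1]  # second element: local Lyapunov exponents
    buffers.append(lyaps)
    if len(buffers) == BUFFER_LENGTH:
        lyap, info = convergence_test(np.asarray(buffers), threshold)
        if info == 1:
            break  # converged: stop integrating and keep lyap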
Requiring the standard deviation of all LEs to be smaller than a threshold is a hard condition; I guess that for large networks the convergence of the few largest LEs is probably enough.
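That would be a one-line change in convergence_test, with k being the hypothetical number of leading exponents to check (assuming the columns of buffers are ordered from largest to smallest exponent):

if (st_dev[:k] < threshold).all():  # only the k largest LEs need to settle
    info = 1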
best,
I have almost the same problem as #21. I am using jitcode_lyap for a Kuramoto model with 500 nodes. I am calculating the Lyapunov dimension (the number of Lyapunov exponents whose cumulative sum is positive). I am running the code on a Linux Ubuntu 18.04 machine with 32 GB RAM.
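For reference, this is roughly how I compute it, a sketch of the standard Kaplan–Yorke formula (the count above is its integer part), assuming the exponents are sorted in descending order:

import numpy as np

def lyapunov_dimension(lyaps):
    # lyaps: Lyapunov exponents sorted in descending order
    cumsum = np.cumsum(lyaps)
    if cumsum[0] < 0:
        return 0.0  # even the largest exponent is negative
    j = np.where(cumsum >= 0)[0][-1]  # largest index with non-negative partial sum
    if j == len(lyaps) - 1:
        return float(len(lyaps))
    return (j + 1) + cumsum[j] / abs(lyaps[j + 1])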
It can be compiled with n_lyap=1 in 300 seconds (~2.5 GB of RAM), but I need to consider about 30 to 50 exponents, which makes the code crash. Sorry, the code is a bit long.
Here is my attempt: