Open RainerEngelken opened 6 years ago
Somebody give this man a medal, please!!!
@RainerEngelken can you give a bit more context here?
computes the condition number of the resulting Q-matrix
What is a "condition number"?
until an acceptable value is reached
What value can pass as "acceptable"? How can we formulate this condition a bit more rigorously?
I agree with you that this is a good suggestion and will have performance gains.
From an API perspective, this seems very easy: it can happen during Ttr, the keyword argument that transiently evolves the system.
In our case, the condition number quantifies the ratio between the first and the kth singular value of Q (or, equivalently, of R) when you are calculating k Lyapunov exponents: https://en.wikipedia.org/wiki/Condition_number. Basically, it measures how close Q is to being singular.
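A minimal sketch of this quantity, using a random matrix as a stand-in for the tangent dynamics (in a real run, `J` would come from integrating the variational equations over one reorthonormalization interval; everything here is illustrative, not the package's API):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10, 4                                  # phase-space dimension, number of exponents
J = rng.normal(size=(n, n))                   # stand-in for a one-interval tangent propagator
Q0, _ = np.linalg.qr(rng.normal(size=(n, k))) # orthonormal tangent basis

M = J @ Q0                                    # evolved (no longer orthonormal) tangent vectors
Q, R = np.linalg.qr(M)                        # reorthonormalization step
s = np.linalg.svd(R, compute_uv=False)        # singular values of R (= those of M)
cond = s[0] / s[-1]                           # ratio of first to kth singular value
print(f"condition number of the QR step: {cond:.2f}")
```

Since M = QR with Q orthonormal, the singular values of R equal those of M, which is why the comment says "Q or equally R".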
The acceptable condition number depends on how many digits of accuracy you are willing to sacrifice. For our case, I would guess a condition number between 10^2 and 10^5 is acceptable; this corresponds to roughly 2-5 digits of accuracy lost during the QR decomposition. Given that Lyapunov exponents converge only $\propto \frac{1}{\sqrt{N}}$ anyway, where $N$ is the number of iterations or time steps, losing the last 2 to 5 digits seems acceptable. I have examined this rigorously only for my favorite kinds of systems (spiking and firing-rate networks); it might be good to play around with the convergence of some other dynamical systems before hard-coding such a number ;-)
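The heuristic above can be made concrete: a condition number kappa costs roughly log10(kappa) decimal digits, which stays harmless as long as the resulting roundoff error sits below the statistical error of the exponent estimate (a back-of-the-envelope sketch, not a rigorous bound):

```python
import math

def digits_lost(kappa):
    # heuristic: ~log10 of the condition number in decimal digits
    return math.log10(kappa)

for kappa in (1e2, 1e5):
    print(f"cond = {kappa:.0e} -> ~{digits_lost(kappa):.0f} digits lost")

# In double precision (~16 digits), losing 2-5 digits leaves ~11-14 digits,
# while the statistical error after N iterations shrinks only like 1/sqrt(N):
N = 1e8
print(f"statistical error after N = {N:.0e} steps ~ {1 / math.sqrt(N):.0e}")
```

So for any realistic N, the 1/sqrt(N) statistical error dominates the 2-5 lost digits by a wide margin.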
Currently, dt is a default value that can be changed manually. It would be great to have an initial short run of the system that iteratively tries different dt and computes the condition number of the resulting Q-matrix until an acceptable value is reached. This could be done, e.g., by iteratively multiplying dt by a constant factor. This avoids loss of numerical precision (in case dt is too large) and unnecessary computation (in case dt is too small). It is also good against climate change in case you run lots of simulations, or alternatively it increases the number of simulations you can run with the same number of CPU hours ;-)
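The proposed calibration loop could look like the following sketch. All names are hypothetical (not the package's API), and `qr_condition` uses a toy diagonal model whose condition number is known in closed form; a real implementation would measure it from the QR step during the transient:

```python
import numpy as np

lams = np.array([1.0, 0.3, -0.2, -1.5])   # toy Lyapunov spectrum (assumed values)

def qr_condition(dt):
    # For a diagonal toy model, the evolved tangent matrix is diag(exp(lam*dt)),
    # so the condition number of the QR step is exp((lam_max - lam_min) * dt).
    s = np.exp(lams * dt)
    return s.max() / s.min()

def calibrate_dt(dt=10.0, lo=1e2, hi=1e5, factor=0.5, max_iter=50):
    """Iteratively rescale dt until the condition number lies in [lo, hi]."""
    for _ in range(max_iter):
        c = qr_condition(dt)
        if c > hi:
            dt *= factor          # dt too large: precision loss, shrink it
        elif c < lo:
            dt /= factor          # dt too small: wasted QR steps, grow it
        else:
            break                 # acceptable band reached
    return dt

dt = calibrate_dt()
print(f"calibrated dt = {dt}, cond = {qr_condition(dt):.1f}")
```

The `max_iter` cap guards against ping-ponging when the acceptable band is narrower than the rescaling factor; a production version might also shrink `factor` toward 1 as it closes in.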