Closed JohannesWiesner closed 3 years ago
This is a great point! There actually is no specific reason why these are implemented differently; it was just the preference of the people who wrote the original function. Right now both choices are arbitrary, but we're working on implementing a more principled method that we hope to include in v0.0.5 once it's up on PyPI.
Hi @jastiso,
okay, thanks for letting me know! I guess it would make sense to allow both the definition of a fixed number of steps (resulting in different step sizes) OR a fixed step size (resulting in different numbers of steps) for both optimal_input and minimum_input. One problem that I discovered today is that the current implementation of optimal_input does NOT necessarily return an array of size 1000*T+1. This seems to be due to floating-point errors (sometimes the array has 1000*T+2 elements). For example, set T=4 and the array will have 4002 elements (at least on my machine), but e.g. T=3.2 gives you (as expected) a 3201-element array:
import numpy as np
from network_control.utils import matrix_normalization
from network_control.energies import optimal_input
# initialize inputs
np.random.seed(28)
A = np.random.rand(5,5)
A = matrix_normalization(A,c=1,version='continuous')
x0 = np.random.rand(5,1)
xf = np.random.rand(5,1)
B = np.eye(5)
S = np.eye(5)
T = 4
rho = 1
# compute optimal energy
x_opt,u_opt,n_err_opt = optimal_input(A,T,B,x0,xf,rho,S)
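The off-by-one can be reproduced directly from the np.arange call that optimal_input uses internally (a minimal sketch; the exact counts are platform-dependent, which is precisely the problem):

```python
import numpy as np

STEP = 0.001
for T in (4, 3.2):
    t = np.arange(0, T + STEP, STEP)
    # the intended length is round(T / STEP) + 1, but np.arange
    # determines its length as ceil((stop - start) / step) computed
    # in floating point, so when (T + STEP) / STEP lands a hair
    # above an integer, arange emits one extra sample
    print(T, len(t), round(T / STEP) + 1)
```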
This again leads to problems when one wants to run multiple state-to-state transitions whose output should be stored in an array with a fixed shape. I thought I could just create a branch and replace:
STEP = 0.001
t = np.arange(0,(T+STEP),STEP)
with:
# Prepare Simulation
nStep=1000
t = np.linspace(0,T,nStep+1)
in optimal_input, but STEP is needed a few lines below. So I guess for now I will just replace those lines with a list comprehension which allows for different output shapes.
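For what it's worth, np.linspace can hand back the step size alongside the grid via retstep=True, so the later lines that need STEP could use that value instead of the hard-coded 0.001 (a sketch of the idea, not an actual patch against the package):

```python
import numpy as np

T = 4
nStep = 1000  # fixed number of steps, as in minimum_energy

# retstep=True additionally returns the spacing between samples,
# so STEP stays available for the code further down
t, STEP = np.linspace(0, T, nStep + 1, retstep=True)
print(len(t), STEP)  # always nStep + 1 samples; STEP = T / nStep
```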
Hi @JohannesWiesner,
Thanks for raising this point! We have modified all of the code to use a common step size, 0.001. The reason the step size needs to be small and consistent is that the code would otherwise produce significant numerical integration errors. For example, with a time horizon of T = 100 and only nStep = 1000, every unit of simulation time would be sampled just 10 times, which causes integration errors.
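The effect of a coarse grid can be illustrated with a generic linear system, comparing forward-Euler integration against the exact matrix-exponential solution (this is a toy illustration of step-size error, not the integrator the package actually uses):

```python
import numpy as np
from scipy.linalg import expm

def euler_error(A, x0, T, n_step):
    # forward-Euler integration of dx/dt = A x, compared against
    # the exact solution x(T) = expm(A*T) @ x0
    h = T / n_step
    x = x0.copy()
    for _ in range(n_step):
        x = x + h * (A @ x)
    return np.linalg.norm(x - expm(A * T) @ x0)

rng = np.random.default_rng(28)
A = -np.eye(5) + 0.1 * rng.standard_normal((5, 5))  # a roughly stable toy system
x0 = np.ones((5, 1))

# step size 0.1 (as T=100 with nStep=1000 would give) vs. step size 0.001
coarse = euler_error(A, x0, T=5.0, n_step=50)
fine = euler_error(A, x0, T=5.0, n_step=5000)
print(coarse, fine)  # the coarse grid gives a much larger error
```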
The new version maintains STEP = 0.001, and should avoid floating-point errors in the following line of code:
t = np.arange(0,(T+STEP/2),STEP)
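With the half-step added to the stop value, the endpoint sits safely between samples, so floating-point noise of order 1e-13 can no longer change the array length (a quick check):

```python
import numpy as np

STEP = 0.001
for T in (3.2, 4, 100):
    t = np.arange(0, T + STEP / 2, STEP)
    # the stop value now lies half a step past the last intended
    # sample, so rounding error cannot push the array length past
    # the intended round(T / STEP) + 1
    assert len(t) == round(T / STEP) + 1
```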
Here in Mannheim, we were wondering about the different implementations of 'time' in minimum_energy and optimal_energy (note that I am still using v0.0.4). The two functions implement the input parameter T differently. Whereas minimum_energy defines a fixed number of nStep = 1000 steps from 0 to T (resulting in bigger step sizes for bigger T and always outputting a 1001-element array), optimal_energy defines a fixed step size of STEP = 0.001 (resulting in more steps for bigger T and outputting a 1000*T+1-element array). Is there a specific reason why this is implemented differently (I guess there is)? Generally, we would be interested in running both minimum_energy and optimal_energy on the same data while ensuring some sort of comparability between the x and u arrays produced by minimum_energy and optimal_energy.
P.S.: I apologize in advance for my fuzzy language. Definitely lacking the math here.
Here's some example code: