LindenParkesLab / nctpy

A toolbox for implementing Network Control Theory analyses in python
MIT License

Why do minimum_energy and optimal_energy produce different output sizes? #19

Closed JohannesWiesner closed 3 years ago

JohannesWiesner commented 3 years ago

Here in Mannheim, we were wondering about the different implementations of 'time' in minimum_energy and optimal_energy (note that I am still using v0.0.4).

Both functions implement the input parameter T differently. minimum_energy defines a fixed number of steps, nStep = 1000, from 0 to T (so a larger T means a larger step size, and the output array always has 1001 rows), whereas optimal_energy defines a fixed step size, STEP = 0.001 (so a larger T means more steps, and the output array has 1000*T + 1 rows). Is there a specific reason why this is implemented differently (I guess there is)? Generally, we would like to run both minimum_energy and optimal_energy on the same data while ensuring some comparability between the x and u arrays the two functions produce.

P.S.: I apologize in advance for my fuzzy language. Definitely lacking the math here.

Here's some example code:

# -*- coding: utf-8 -*-
"""
Note that we are using network_control v0.0.4
"""

import numpy as np
from network_control.utils import matrix_normalization
from network_control.energies import minimum_energy,optimal_energy

# initialize inputs
np.random.seed(28)
A = np.random.rand(5,5)
A = matrix_normalization(A, c=1)
x0 = np.random.rand(5,1)
xf = np.random.rand(5,1)
B = np.eye(5)
S = np.eye(5)
T = 20
rho = 1

# calculate minimum and optimal energy
x_min,u_min,n_err_min = minimum_energy(A, T, B, x0, xf)
x_opt,u_opt,n_err_opt = optimal_energy(A, T, B, x0, xf, rho, S)
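The size mismatch can be seen without calling the toolbox at all, by reconstructing the two time grids directly in NumPy (a sketch mirroring the v0.0.4 constants described above, not toolbox code):

```python
import numpy as np

T = 20

# minimum_energy-style grid: fixed number of steps, T-dependent step size
nStep = 1000
t_fixed_count = np.linspace(0, T, nStep + 1)  # always 1001 samples

# optimal_energy-style grid: fixed step size, T-dependent number of steps
STEP = 0.001
t_fixed_step = np.arange(0, T + STEP, STEP)   # roughly 1000*T + 1 samples

print(len(t_fixed_count))  # 1001
print(len(t_fixed_step))   # about 20001, possibly off by one (floating point)
```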
jastiso commented 3 years ago

This is a great point! There actually is no specific reason why these are implemented differently; it was just the preference of the people who wrote the original functions. Right now both choices are arbitrary, but we're working on implementing a more principled method that we hope to include in v0.0.5 when it's up on PyPI

JohannesWiesner commented 3 years ago

Hi @jastiso,

okay, thanks for letting me know! I guess it would make sense to allow both a fixed number of steps (resulting in different step sizes) AND a fixed step size (resulting in different numbers of steps) for both optimal_input and minimum_input? One problem I discovered today is that the current implementation of optimal_input does NOT always return an array with 1000*T + 1 rows. This seems to be due to floating-point errors (sometimes the array has 1000*T + 2 rows). For example, set T = 4 and the array has 4002 rows (at least on my machine), whereas, e.g., T = 3.2 gives the expected 3201 rows:

import numpy as np
from network_control.utils import matrix_normalization
from network_control.energies import optimal_input

# initialize inputs
np.random.seed(28)
A = np.random.rand(5,5)
A = matrix_normalization(A,c=1,version='continuous')
x0 = np.random.rand(5,1)
xf = np.random.rand(5,1)
B = np.eye(5)
S = np.eye(5)
T = 4
rho = 1

# compute optimal energy
x_opt,u_opt,n_err_opt = optimal_input(A,T,B,x0,xf,rho,S)

This again leads to problems when one wants to run multiple state-to-state transitions whose outputs should be stored in an array of fixed shape. I thought I could just create a branch and replace:

STEP = 0.001
t = np.arange(0,(T+STEP),STEP)

with:

# Prepare Simulation
nStep=1000
t = np.linspace(0,T,nStep+1)

in optimal_input, but STEP is needed a few lines below. So I guess for now I will just replace those lines with a list comprehension which allows for different output shapes.
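If the later lines only need the scalar step size, a minimal sketch of a fixed-length workaround would be to recover the (now T-dependent) step from the linspace grid itself (variable names here are illustrative, not the toolbox's):

```python
import numpy as np

T = 4
nStep = 1000

# fixed-length grid: always nStep + 1 samples, regardless of T
t = np.linspace(0, T, nStep + 1)

# recover the step size for the code further down that needs STEP
step = t[1] - t[0]  # equals T / nStep
```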

jk6294 commented 3 years ago

Hi @JohannesWiesner,

Thanks for raising this point! We have modified all of the code to use a common step size of 0.001. The step size needs to be small and consistent because the code will otherwise produce significant numerical integration errors. For example, with a time horizon of T = 100 and only nStep = 1000, every unit of simulation time is sampled just 10 times, which causes integration errors.

The new version keeps STEP = 0.001 and should avoid the floating-point length errors via the following line of code:

t = np.arange(0,(T+STEP/2),STEP)
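A quick sanity check (not from the repository) that the half-step trick yields a consistent length of round(1000*T) + 1 across horizons: the STEP/2 margin on the stop value dwarfs any floating-point error, so np.arange can no longer overshoot by one sample.

```python
import numpy as np

STEP = 0.001
for T in (3.2, 4, 7, 20, 100):
    t = np.arange(0, T + STEP / 2, STEP)
    # length is now deterministic, including the previously problematic T = 4
    assert len(t) == round(1000 * T) + 1, (T, len(t))
```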