Closed mforets closed 6 years ago
It is not completely inconsistent: we interpret it after the projection onto the respective blocks.
(I know that you knew that.)
I agree that it can be confusing. You would expect `[0, 3]` in this case.
I see two possible changes:

1. Accept `[0, 3]` and compute, using `:blocks`, `[0, 1]` when you parse the options; then continue internally as before.
2. Keep the given value until `:plot_vars` is used and do the computation there.

The good thing with 1. is that we do not have to change existing code, that we have to compute `[0, 1]` only once at a central place, and that we do not need the `:blocks` argument later.
The good thing with 2. is that there is no inconsistency/confusion internally, e.g., if you debug.
inside the function `validate_solver_options_and_add_default_values!`, can we produce the default value for `plot_vars` by checking what has been given as `blocks`? this is probably your option 1 above?
If I understand you correctly, you want to convert the current default value `[0, 1]` to `[0, 3]` from `blocks == [2, 5]`.
Then the computation should be `[0, blocks[1]*2-1]`, right?
yes, that'll work.
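A minimal sketch of that computation (the helper name `default_plot_vars` is hypothetical, not from the package):

```julia
# Hypothetical helper: derive a default `plot_vars` from `blocks`.
# Under the uniform decomposition into blocks of size 2, block b
# covers state variables 2b-1 and 2b, so the first variable of the
# first computed block is blocks[1]*2 - 1; index 0 denotes time.
function default_plot_vars(blocks::Vector{Int})
    return [0, blocks[1] * 2 - 1]
end

default_plot_vars([2, 5])  # returns [0, 3]
```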
so, my proposal is:

- a default value for `plot_vars`, e.g. the one you suggested, but
- `apply_projection` by default if `blocks` is given as an input.

Why should we `apply_projection` if `blocks` is not given? I find this confusing.
I understand it if we compute `blocks` intelligently from `plot_vars` in this case, but currently we do not.
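Computing `blocks` from the variables of interest could look as follows (a hypothetical sketch, assuming the uniform decomposition into blocks of size 2, with index 0 reserved for time):

```julia
# Hypothetical sketch: infer which 2D blocks are needed for the
# given variables. State variable v lives in block cld(v, 2);
# the time index 0 needs no block.
function infer_blocks(vars)
    return sort!(unique(cld(v, 2) for v in vars if v != 0))
end

infer_blocks((0, 1, 5))  # returns [1, 3]
```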
> Why should we apply_projection if blocks is not given? I find this confusing.

it's needed to add the time variable for example.

I do not follow.
Let's approach this in terms of use cases. Let `c = ContinuousSystem(A, X0, U)`.
Here is a draft:
1. Minimum number of options: `s = solve(c, :T=>1.)`. This produces a `Vector{HPolygon}`. Internally we did apply `project` to add the time dimension, where `plot_vars=[0, 1]`. Advantage: `plot(s)` "just works", i.e. no need for getting into advanced options (yet).
2. Say that variables `[1, 5]` are needed. Then we can make it so that `s = solve(c, :plot_vars=>[1, 5])` followed by `plot(s)` works. What do you think? Internally, we shall deduce the required `blocks` for the variables of interest, and also project onto them. This also produces a `Vector{HPolygon}`.
3. I want to have more control of the internals, so I can do `s = solve(c, :blocks=>[1, 3], :apply_projection=>false)`. This produces a `Vector{CartesianProductArray}`. Then, at a later stage, I can do `sp = project(s, :plot_vars=>[1, 5])`, and then `plot(sp)`.
Edit: the motivation for the 2nd use case is that one doesn't need to understand the implementation (block decomposition) to get the result.
I see, yes, that is a good suggestion.
Then in this issue we should also do the inference of `blocks`; I like that.
cc: @roccaa
@mforets: Is my understanding correct that we always want to `apply_projection` by default?
I tried to satisfy these examples. Here are the problems:

- `T`.
- `project` does not accept a `ReachSolution` argument, so you have to call with `sp = project(s.Xk, :plot_vars=>[1, 5])`. If you want to support the other, please add a respective constructor.
- `KeyError: key :ε not found`. I have not investigated further.

Discussion can continue in PR#25.
thanks for the response. i changed my mind on a couple of things. below is a proposal to get started:

The function `solve` provides the interface to the algorithm backends that actually solve the reachability problem. In general, you will be interested in a subset of the state variables only, be it for visualization purposes or for storing the results for subsequent computations. The variables of interest can be specified with the `vars` argument. This is illustrated by an example:
```julia
# produce a ten-dimensional random system
julia> A = randn(10, 10)
julia> X0 = BallInf(ones(10), 0.1)
julia> p = ContinuousSystem(A, X0)

# solve the reachability problem for 100 steps in the time horizon 1.0
julia> s = solve(p, :T=>1.0, :N=>100, :vars=>(0, 1))
```
Here we specify that we are interested in the time variable, thus the index `0`, and the variable `x_1`.
We could as well compute the reach set for the variables `t`, `x_1` and `x_3` by adding more entries to the tuple, i.e. with the command:

```julia
julia> s = solve(p, :T=>1.0, :N=>100, :vars=>(0, 1, 3))
```
By default, if you don't specify any variables, then the whole state space is computed. Hence,

```julia
julia> s = solve(p, :T=>1.0, :N=>100)
```

produces the same result as

```julia
julia> s = solve(p, :T=>1.0, :N=>100, :vars=>(1..10))
```

In the following sections we see how to project and plot reach sets.
| Option | Default | Aliases | Type | Description | Example |
|---|---|---|---|---|---|
| `vars` | `(1:n)` | | `Tuple` | | `:vars=>(0, 3)` |
| `N` | | | | | `:N=>1000` |
| `δ` | | | | | `:δ=>0.1` |
| `T` | | | | | `:T=>1.0` |
| `mode` | | | | | `:mode=>"reach"` |
| Option | Default | Aliases | Type | Description | Example |
|---|---|---|---|---|---|
| `n` | | | | | |
| Option | Default | Aliases | Type | Description | Example |
|---|---|---|---|---|---|
| `assume_sparse` | | | | | `:assume_sparse=>true` |
| `blocks` | | | | | `:blocks=>[2, 5]` |
| `apply_projection` | | | | | `:apply_projection=>true` |
| `ε` | | | | | `:ε=>1e-3` |
| `coordinate_transformation` | | | | | |
| `algorithm` | | | | | `:algorithm=>"explicit"` |
| `iterative_refinement` | | | | | `:iterative_refinement=>false` |
| `assume_homogeneous` | | | | | `:assume_homogeneous=>false` |
| `lazy_expm` | | | | | `:lazy_expm=>false` |
| `approx_model` | | | | | `:approx_model=>"forward"` |
| `projection_matrix` | | | | | `:projection_matrix=>nothing` |
| `pade_expm` | | | | | `:pade_expm=>false` |
| `lazy_X0` | | | | | `:lazy_X0=>false` |
This is only evaluated if `:mode=>"check"`.
| Option | Default | Aliases | Type | Description | Example |
|---|---|---|---|---|---|
| `property` | `nothing` | | | | `:property=>Property([2., -3.], 450.8)` |
Finally,

```julia
julia> s = solve(p, :T=>1.0, :N=>100, :vars=>(0, 1))
julia> s.options.dict
...
```
Let me recapitulate:

- rename `plot_vars` to `vars` (should we still support the alias `output_variables`?) 👍
- change the type of `plot_vars` from array to tuple - why?

Questions:

- if `vars = (0, 3)`, then we return a 1D result. Plotting cannot understand the index 3.

> change type of plot_vars from array to tuple - why?
this and other little proposed changes are because i'm always looking at how this is done in the mature Julia package DifferentialEquations: not only because other users will probably be familiar with it (if you need to make e.g. simulations), but also within this package if we support some form of simulations backend. that's why it seemed reasonable to call it `vars` instead of `plot_vars`.
> This will probably require some changes because at some places we assume that there are exactly two (probably for plotting and maybe for projection as well).

indeed.
> How do we specify which variables to plot?

the approach that we are taking is that the plot recipes don't know about projection: they just receive a vector of `HPolygon` (or an abstract `LazySet`, but always in 2D). hence, the plot recipe only needs to receive the label names.

> Plotting cannot understand the index 3.

this is the purpose of the `xguide` and `yguide` things; see in `plot_recipes.jl` the function `plot_sol(sol::ReachSolution{HPolygon};`
> should we still support the alias output_variables?

i'm not sure about that part of the code. maybe later, we can adapt it a little bit. we should be able to parse the standard LTI system `x' = Ax + Bu`, `y = Cx + Du`, where the "output variables" are `y`.
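As a toy illustration (plain Julia, not package code), selecting output variables amounts to multiplying the state by a projection-like output matrix `C`:

```julia
# y = Cx picks out state variables 1 and 3 of a 4-dimensional state;
# each row of C is a unit vector selecting one variable.
C = [1.0 0.0 0.0 0.0;
     0.0 0.0 1.0 0.0]
x = [10.0, 20.0, 30.0, 40.0]
y = C * x  # returns [10.0, 30.0]
```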
> that's why it seemed reasonable to call it vars instead of plot_vars

My question was: why should we change the type? I am totally fine with changing the name.
yes, i think so. looking in this wiki it says:

> A tuple is an ordered sequence of elements, like an array. A tuple is represented by parentheses and commas, rather than the square brackets used by arrays. Tuples are mostly good for small fixed-length collections — they're used everywhere in Julia, for example, as argument lists and for returning multiple values from functions. The important difference between arrays and tuples is that tuples are immutable.
I do not like that you have to input the variables as a `Tuple`.
What if you are interested in half of the variables of a 200-variable model?
Then you have to create a tuple containing all those numbers.
With an `AbstractVector` we can do it easily by `1:100`.
> With AbstractVector we can do it easily by 1:100.

There is `ntuple(n->n, 100)`, which is a bit more verbose.
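For comparison, both spellings denote the same 100 variables (plain Julia):

```julia
vars_range = 1:100                  # an AbstractVector (a range), compact and cheap
vars_tuple = ntuple(n -> n, 100)    # an NTuple{100, Int}, more verbose to write
collect(vars_range) == collect(vars_tuple)  # true
```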
The motivation for the "Common solve options" vs. the "Advanced solve options" above was in reality that we start making a layer for the `solve` interface, such that it can work with different algorithms. On a lower level you have e.g. the current `reach` for affine ODEs with nondeterministic inputs, but one could use this `solve` interface with a reach algorithm for nonlinear ones. This is all wishful thinking, but since we are at it...
I find this inconsistent: since block 1 has not been computed at all, `plot_vars` cannot be `[0, 1]`!