fdschneider / caspr

Cellular Automata for Spatial Pressure in R
MIT License

define stability check for model run #17

Open fdschneider opened 9 years ago

fdschneider commented 9 years ago

To evaluate the end of transient dynamics, caspr measures stability as the difference in means over two subsequent time periods and stops the simulation once this difference falls below a threshold.
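In pseudocode, that criterion amounts to something like the following (an illustrative R sketch with made-up names, not caspr's actual internals):

```r
# Illustrative stopping rule: compare mean cover over two subsequent
# evaluation periods and report steady state once the difference in
# means falls below a threshold. Names here are hypothetical.
is_steady <- function(cover, t_eval, threshold) {
  n <- length(cover)
  if (n < 2 * t_eval) return(FALSE)  # not enough history yet
  m_old <- mean(cover[(n - 2 * t_eval + 1):(n - t_eval)])
  m_new <- mean(cover[(n - t_eval + 1):n])
  abs(m_new - m_old) < threshold
}
```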

I suspect that the different models show different patterns in their time series, so different threshold values might apply. Is there an objective way of defining stability for the different models?

guttal commented 9 years ago

Although there won't be a universal way to define the end of the transient state or the beginning of the 'steady state' (I guess that is what you mean instead of 'stability'), here is one possible approach. Plot both the mean density and the variance in density, each calculated over a window of time: if both stabilise, that's an indication that the steady state has arrived, i.e. that transients have ended.

One issue is that this may work for one of the quantities but not for all (especially if something is cyclical). For example, mean density may reach a steady state in 500 iterations, whereas patch sizes may take much longer. There is no reason to assume that all quantities reach steady state at the same rate. Fortunately, though (from my limited experience), accounting for one of them is usually good enough.
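One way to operationalise that suggestion (a sketch assuming a plain numeric time series of density; all function names are made up):

```r
# Sketch: compute mean and variance of density over non-overlapping time
# windows and declare steady state once both have stabilised between the
# last two windows. Window width and tolerances are free choices.
windowed_stats <- function(x, width) {
  starts <- seq(1, length(x) - width + 1, by = width)
  data.frame(
    mean = sapply(starts, function(i) mean(x[i:(i + width - 1)])),
    var  = sapply(starts, function(i) var(x[i:(i + width - 1)]))
  )
}

both_stable <- function(x, width, tol_mean, tol_var) {
  if (length(x) < 2 * width) return(FALSE)
  s <- windowed_stats(x, width)
  abs(diff(tail(s$mean, 2))) < tol_mean &&
    abs(diff(tail(s$var, 2))) < tol_var
}
```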

fdschneider commented 9 years ago

After the discussion we had yesterday, I think we will check that the difference in mean density/cover over two subsequent evaluation periods is not larger than a threshold value. That means two parameters define the criterion: the threshold value steady and the length of the evaluation period t_eval.

For our comparative study, we need a sound definition of the end of the transient dynamics for each of our models. The models differ greatly in their variation over time and in their transient behaviour. t_eval also defines how many snapshots of the landscape are saved for investigation: if we take snapshots every 50 timesteps and decide to keep 20 snapshots, our t_eval must be 20 * 50 = 1000.
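In R terms, the bookkeeping is simply (values taken from the example above):

```r
snapshot_interval <- 50                    # timesteps between saved snapshots
n_snapshots       <- 20                    # snapshots kept for investigation
t_eval <- n_snapshots * snapshot_interval  # evaluation period: 1000 timesteps
```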

So, somebody should check the time series of some representative cases, particularly close to catastrophic shifts, and see which threshold values should be applied for each model at our target lattice size.

fdschneider commented 9 years ago

I will add suggested values of t_eval, steady and width to the model objects, which the function ca() will use as defaults. Is there a mathematical argument that would allow us to adjust our steadiness criterion to lattice size? Does the variation of cover increase linearly with the area of the lattice? I am not sure about that.
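On the scaling question: if cells were spatially uncorrelated, the variance of mean cover would scale as 1/N with N the number of cells, i.e. the standard deviation as 1/sqrt(N), so it decreases rather than increases with lattice area. Spatial correlation in the CA will inflate this, but here is a quick sketch of the null expectation (the cover value is an assumption for illustration, not taken from any of our models):

```r
# Null expectation without spatial correlation: sd of mean cover shrinks
# as 1 / sqrt(number of cells). A correlated CA lattice behaves as if it
# had fewer independent cells, so treat this as a lower bound.
set.seed(1)
p <- 0.3  # assumed mean cover, for illustration only
for (width in c(50, 100, 200)) {
  n_cells  <- width^2
  sd_cover <- sd(replicate(1000, mean(rbinom(n_cells, 1, p))))
  cat(sprintf("width %4d: sd(cover) = %.5f, theory = %.5f\n",
              width, sd_cover, sqrt(p * (1 - p) / n_cells)))
}
```

If that scaling held, the threshold steady could be scaled by 1/width to keep the criterion comparable across lattice sizes.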

We also thought about putting those values into a stability-criterion function, which could be user-specified. This is quite complicated, though.

fdschneider commented 9 years ago

I changed the default behaviour so that the simulation does not stop before t_max (the frequency of snapshots of the full lattice is now independent of this value). The auto-stop function is still problematic.

For now, I added a switch stopifsteady = FALSE in ca(); if set to TRUE, it applies the function provided in the parameter steady. By default, that is a function which compares the means of two subsequent periods and returns TRUE, terminating the simulation, if the difference in means falls below a threshold value.
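To illustrate the interface (argument names follow the description above, but the actual ca() signature may differ), a user-supplied criterion could, for example, compare variances instead of means:

```r
# Hypothetical user-specified criterion for stopifsteady: declare steady
# state when the variance of cover stabilises between two subsequent
# evaluation periods. Not caspr's default, just an example.
my_criterion <- function(cover, t_eval = 500, tol = 1e-4) {
  n <- length(cover)
  if (n < 2 * t_eval) return(FALSE)
  v_old <- var(cover[(n - 2 * t_eval + 1):(n - t_eval)])
  v_new <- var(cover[(n - t_eval + 1):n])
  abs(v_new - v_old) < tol
}

# hypothetical call, assuming ca() passes the recorded cover series:
# run <- ca(landscape, model, stopifsteady = TRUE, steady = my_criterion)
```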

Besides the dependency on lattice size (see above), I found that this does not work as intended, since the difference in means varies quite stochastically over time. Thus, the threshold might be reached early purely by chance, without indicating the end of transient dynamics at all. Either we need a different method for testing the end of transient dynamics (e.g. something like an Augmented Dickey–Fuller test?), or we simply define a simulation duration long enough to guarantee steady-state dynamics for all our models.
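If we go the stationarity-test route, one option would be the ADF test from the tseries package. Its null hypothesis is a unit root (non-stationarity), so a small p-value on the recent part of the series would indicate that the transients have ended. A sketch (window length and significance level are arbitrary choices here):

```r
# Stationarity check on the tail of the cover time series using the
# Augmented Dickey-Fuller test (tseries::adf.test). Rejecting the
# unit-root null (small p-value) suggests the recent dynamics are
# stationary, i.e. transients are over.
library(tseries)

transients_over <- function(cover, window = 1000, alpha = 0.05) {
  if (length(cover) < window) return(FALSE)
  adf.test(tail(cover, window))$p.value < alpha
}
```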

@guttal @skefi: Any suggestions?