This repository is a resource for testing system dynamics software and translation tools. It provides a standard set of simple test cases in various formats, along with a proposed canonical output for each test.
Folders within the `tests` directory contain models that exercise a minimal amount of functionality (such as lookup tables), for doing unit-style testing on translation and simulation pathways.
Folders within the `samples` directory contain complete models that can be used for integration tests, benchmarking, and demos.
Each model folder contains:

- a canonical output file (`output.csv` or `output.tab`) containing (at least) the stock values over the standard timeseries in the model files
- a `README.md` file

For a demonstration, see the teacup example.
All members of the SD community are invited to contribute to this repository. To do so, create a fork, add your contribution using one of the following methods, add yourself to the AUTHORS file, then submit a pull request.
To request that a specific test be added, create an issue on the issues page of this repository.
Many of these cases have model files for some modeling formats but not others. To add a model file in another format, check that your model's output replicates the canonical example to reasonable fidelity, preferably using identical variable names, and add an entry to the contributions table in the directory's `README.md` file.
To add a new case, add a folder in your local clone under either the `tests` or `samples` directory as appropriate, copy an example `README.md` file from another folder, and edit it to suit your needs.
To simplify tools and scripts around model validation, canonical output files should be UTF-8 tab-separated or comma-separated files. Each row represents model results at a single timestep (rather than each row representing a single variable's results for every timestep).
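As an illustration, here is a minimal sketch of reading such a columnar file in Python. The file contents and the `teacup_temperature` variable name are hypothetical, for illustration only:

```python
import csv
import io

# A tiny columnar output file: one row per timestep, one column per
# variable (contents are hypothetical, for illustration only).
canonical = """time,teacup_temperature
0.0,180.0
0.125,179.13
0.25,178.26
"""

# Each row holds the model's results at a single timestep.
rows = list(csv.DictReader(io.StringIO(canonical)))
for row in rows:
    print(row["time"], row["teacup_temperature"])
```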
The following process ensures that output files end up in the format expected by tools that interact with this repository.
In Stella:

1. In the `Export Data` modal dialog, choose `One Time` as the `Export Type`.
2. Under `Export Data Source`, make sure both the `Export all model variables` and `Every DT - Export every intermediate value during the run` options are selected.
3. Under `Export Destination`, choose `Browse`, name the file `output.csv`, and make sure the left-most checkbox below `Browse` is selected. (You may have to create an empty file named `output.csv` manually beforehand in your operating system's file browser.)
4. Ensure that of the two `Data` styles (columnar on the left, horizontal on the right) the left-most, columnar style is selected. This is the default.
5. Click `OK` at the bottom right to perform the export.

In Vensim:

1. From the `Model` menu, choose `Export Dataset...`.
2. Choose `Current.vdf`.
3. Next to the `Export To` button, change the name of the export file from `Current` (or whatever your run name was) to `output`.
4. For `Export As`, choose `tab`.
5. For `Time Running`, choose `down`.
6. Open `output.tab` in a spreadsheet program, and make sure that the values of constant terms are propagated down the column for each timestep.

There are two scripts in the top level of this repo to aid in debugging: `compare.py` and `regression-test.py`.
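The constant-propagation step above can also be done programmatically. A minimal sketch follows; it is not part of this repo's scripts, and the column layout (with a `room_temperature` constant emitted only at the first timestep) is hypothetical:

```python
import csv
import io

# A tab-separated export where the constant `room_temperature` only
# appears at the first timestep (hypothetical layout).
exported = (
    "time\tteacup_temperature\troom_temperature\n"
    "0.0\t180.0\t70.0\n"
    "0.125\t179.13\t\n"
    "0.25\t178.26\t\n"
)

rows = list(csv.reader(io.StringIO(exported), delimiter="\t"))
header, data = rows[0], rows[1:]

# Forward-fill: copy the most recently seen value into any empty cell,
# so every timestep carries a value for every constant.
last = data[0][:]
for row in data:
    for i, cell in enumerate(row):
        if cell == "":
            row[i] = last[i]
        else:
            last[i] = cell

for row in data:
    print("\t".join(row))
```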
`compare.py` expects paths to two CSV or TSV files, and will compare the results of the two files, with some amount of smartness/fuzziness around floating-point comparisons.
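The kind of fuzzy floating-point comparison involved can be sketched roughly like this; the tolerance values below are illustrative, not `compare.py`'s actual implementation:

```python
import math

def values_match(a: float, b: float,
                 rel_tol: float = 1e-5, abs_tol: float = 1e-8) -> bool:
    """Treat two simulation results as equal if they are close enough,
    rather than requiring bit-identical floating-point values."""
    return math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)

print(values_match(179.130001, 179.13))  # tiny float drift passes
print(values_match(179.13, 180.0))       # a real difference fails
```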
`regression-test.py` can be used to compare a specific modeling tool's output against the accepted, canonical output for a given model (which is stored in `output.csv` in all the subdirectories of this repository). It can be run with external tools, with the current working directory as the root of this project:
```shell
$ ./regression-test.py ~/src/libsd/mdl .
$ ./regression-test.py ~/src/sd.js/mdl.js .
```
And it can also be run from outside of this project, for example when this `test-models` repo is included as a git submodule in another project:
```shell
$ test/test-models/regression-test.py ./mdl test/test-models
```
The main requirement is that the given command (`mdl` and `mdl.js` above) accept the path to a model as an argument, and output model results to `stdout` in either TSV or CSV format. If your tool requires additional command-line args, you can specify them with quoting:
$ ./regression-test.py "~/path/to/tool --arg1 --arg2" .
And if you have a tool that simulates Vensim models or Stella v10 models rather than XMILE, you can change the model-file suffix:
```shell
# test Vensim model files
$ ./regression-test.py --ext mdl ~/path/to/tool .

# test Stella v10 XMILE-variant model files
$ ./regression-test.py --ext stmx ~/path/to/tool .
```