The `docs/case` directory contains the case description.
Add your prerequisites here!
The `scripts` directory contains the `run.py` script.
To get started, invoke it without any arguments: this builds the solution, runs the benchmark, visualizes the running times, and compares the results to the reference solution's.
One might fine-tune the script for the following purposes:

- `run.py -b` -- builds the projects
- `run.py -b -s` -- builds the projects without testing
- `run.py -g` -- generates the instance models
- `run.py -m` -- runs the benchmark without building
- `run.py -v` -- visualizes the results of the latest benchmark
- `run.py -e` -- compares the results to the reference output; the benchmark must already have been executed with `-m`
- `run.py -m -e` -- runs the benchmark without building, then extracts the results and compares them to the reference output
- `run.py -t` -- builds the project and runs the tests (usually unit tests, as defined for the given solution)

The `config` directory contains the configuration for the scripts:

- `config.json` -- configuration for the model generation and the benchmark (`-m` for `run.py`); it is not applied to e.g. the build phase (see `-b` for `run.py`)
- `reporting.json` -- configuration for the visualization

The script runs the benchmark for the given number of runs, for the specified tools and change sequences.
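As an illustration, a minimal `config.json` could look like the sketch below. Note that the key names (`runs`, `tools`, `changeSets`, `minSize`, `maxSize`) are assumptions made for this example, not the actual schema — consult the file shipped in the `config` directory for the real keys.

```json
{
  "runs": 5,
  "tools": ["my-tool"],
  "changeSets": ["fixed", "proportional"],
  "minSize": 1,
  "maxSize": 2048
}
```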
The benchmark results are stored in a CSV file; the header for that file is stored in the `output/header.csv` file.
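Since the header is stored separately, downstream tooling presumably has to attach it to the result rows before processing them. A minimal sketch of doing so, with made-up column names and file contents (the real ones live under `output/`):

```python
import csv
import io

# Hypothetical contents standing in for output/header.csv and a result file.
header_csv = "tool,changeSet,size,phase,run,time\n"
results_csv = "my-tool,fixed,16,read,1,1234\nmy-tool,fixed,16,check,1,567\n"

# Read the column names from the header file, then zip them onto each row.
columns = next(csv.reader(io.StringIO(header_csv)))
rows = [dict(zip(columns, rec)) for rec in csv.reader(io.StringIO(results_csv))]

for row in rows:
    print(row["tool"], row["phase"], row["time"])
```

With real files, the `io.StringIO` wrappers would simply be replaced by `open(...)` calls.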
Make sure you read the `README.md` file in the `reporting` directory and install all the requirements for R.
To implement a tool, you need to create a new directory in the solutions directory and give it a suitable name.
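The internal layout of a solution directory is up to you. As a sketch, the following creates a hypothetical skeleton — the `src` subdirectory and the `README.md` placeholder are illustrative assumptions, not requirements of the framework:

```python
import tempfile
from pathlib import Path


def create_solution_skeleton(root: str, name: str) -> Path:
    """Create a hypothetical skeleton for a new solution under solutions/."""
    solution = Path(root) / "solutions" / name
    # A source subdirectory and a placeholder README; adapt to your tool.
    (solution / "src").mkdir(parents=True, exist_ok=True)
    (solution / "README.md").write_text(f"# {name} solution\n")
    return solution


with tempfile.TemporaryDirectory() as tmp:
    path = create_solution_skeleton(tmp, "my-tool")
    print(path.name)  # my-tool
```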