This is a library of test models and results for development of the QSS solver being built as part of the "Spawn of EnergyPlus" project. The library supports building model FMUs, running them with QSS and other modeling tools, and comparing and regression-testing the results.
Models will be run as FMUs since the QSS solver is being built for integration into JModelica via the FMU interface. Some simpler models may also be run as QSS "code-defined" models for results and performance comparison.
Due to the size of model FMUs, modelDescription.xml, and output signal files, this repository stores only scripts, models, and descriptive text files.
The top-level repository directory contains these subdirectories:
bin/
mdl/
bin
The bin directory contains scripts for modeling and testing:
bld.py : Default bld_fmu.py wrapper
bld_fmu.py : Builds the model FMU with OPTIMICA or JModelica depending on the current directory
bld_fmus.py : Builds all model FMUs with OPTIMICA
cleanup : Removes comparison/regression testing output files
cmp_CVode_QSS3_Buildings.py : Runs and compares CVode and QSS3 simulations for a set of Buildings library models
cmp_CVode_QSS3_simple.py : Runs and compares CVode and QSS3 simulations for a set of simple models
cmp_PyFMI_QSS.py : Runs and compares PyFMI and QSS simulations for the local model
cmp_PyFMI_QSS_hdr.py : Generates the YAML file header for a PyFMI vs QSS comparison run
cmp_PyFMI_QSS_yaml.py : Compares the YAML results files for two PyFMI vs QSS comparison runs
comparison : Compares results from two modeling tools
csv2ascii.py : Converts CSV files to ASCII files
jm* : Wraps jm_python.sh : Customize to your system
jmi* : Wraps jm_ipython.sh : Customize to your system
ref.py : Runs the model's FMU with PyFMI or QSS depending on the local directory : small-tolerance "reference" solution
regression : Regression tests results from two versions of a modeling tool
run.py : Runs the model's FMU with PyFMI or QSS depending on the local directory
run_PyFMI.py : Runs the model's FMU with PyFMI
run_PyFMI_red.py : Runs the model's FMU with PyFMI with optional output redirection via a --red=LOGFILE option
run_PyFMI_run.py : Runs the model's FMU with PyFMI with output redirection to a run.log file
run_QSS.py : Runs the model's OCT FMU with QSS (supports QSS options)
set_JModelica : Sets environment for JModelica : Customize to your system
set_Modelica : Sets environment for Modelica and the Buildings library : Customize to your system
set_OCT : Sets environment for OCT : Customize to your system
simdiff.py : Simulation results comparison tool
stp_QSS.py : Runs the model's FMU with QSS and checks/reports step counts
stp_QSS_simple.py : Checks/reports step counts for a set of simple models with known/expected step counts

bin Notes

jm_python.sh needs to be on your PATH to use some of these scripts.

mdl
The mdl directory contains the models and results with this (tentative) organization for each model:
ModelName/
ModelName.mo Modelica model
ModelName.ref Modelica or Buildings Library model name and, optionally, Buildings Library branch and/or commit
ModelName.txt Notes
ModelName.var Output variable name list (supports glob wildcard syntax)
Dymola/ Dymola model & results
JModelica/ JModelica model & results
OCT/ OPTIMICA model & results
Ptolemy/ Ptolemy model & results
QSS/ QSS model & results
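The glob wildcard matching that a ModelName.var file supports can be sketched with Python's fnmatch module (a minimal illustration, not the repository's actual implementation):

```python
from fnmatch import fnmatch

def select_outputs(patterns, variables):
    """Return the model variables matched by any glob pattern from a .var file."""
    return [v for v in variables if any(fnmatch(v, p) for p in patterns)]
```

For example, a hypothetical .var line of zone.T* would select zone.T and zone.TAir but not zone.Q_flow.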
Each non-QSS modeling tool (OCT, JModelica, Dymola, and Ptolemy) sub-directory has this structure:
*Tool*/
ModelName.txt Notes
ModelName.mo Modelica model customized for Tool if needed
ModelName.fmu FMU generated by Tool (with a modified XML inserted)
modelDescription.orig.xml Original FMU XML (if XML modifications needed)
modelDescription.prep.xml Modified FMU XML (if XML modifications needed)
modelDescription.xml XML from the FMU (after any modifications are made)
out/ Results
run/ Standard run results
ref/ Reference run results
Tool-specific versions of the Modelica files include customizations for that tool.
The QSS sub-directory has this structure:
QSS/
ModelName.txt Notes
ModelName.fmu Specialized FMU used for QSS runs if needed
FMU-[LI]QSS#/ FMU-QSS [LI]QSS# run
[LI|x]QSS#/ [LI|x]QSS# run
where # is 1, 2, or 3 indicating the QSS method order.
The QSS subdirectories may have a custom run.py
script with specialized options suggested or needed for the model.
The FMU-QSS subdirectories have a run
script that generates the FMU-QSS and then runs it with the QSS application.
The QSS2 method is currently the best choice in most circumstances (QSS3 performance and accuracy are limited due to numerical differentiation) so the other sub-directories may not be present. The LIQSS2 method is probably the best for "stiff" models. The first-order QSS1 and LIQSS1 methods are mostly of academic interest since they are very slow for most models.
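For intuition on what the method order means: a first-order QSS step advances a state to the time at which its linear prediction drifts one quantum from the quantized value. A toy single-state sketch (illustrative only, not the project's solver):

```python
def qss1(f, x0, quantum, t_end):
    """Toy first-order QSS integration of x' = f(x) for a single state."""
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_end:
        dx = f(x)  # derivative at the current quantized value
        if dx == 0.0:
            break  # at equilibrium: no further requantization events
        dt = quantum / abs(dx)  # time for the prediction to drift one quantum
        if t + dt > t_end:
            break
        t += dt
        x += quantum if dx > 0.0 else -quantum  # requantize at the new value
        trajectory.append((t, x))
    return trajectory
```

Higher-order methods (QSS2/QSS3) extend the prediction with second- and third-derivative terms, which is why QSS3 suffers when those higher derivatives must come from numerical differentiation.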
Notes on each of the modeling tools appear below.
Use of the --coarse option when differencing needs to be explored.

OPTIMICA is the default Modelica tool for QSS now that it has event indicator and other QSS-specific support.
JModelica lacks this QSS support and is being retired, but it can still be used for limited QSS modeling, so it is still supported here.
FMUs can be built directly from models in the Buildings Library by placing a ModelName.ref file alongside the ModelName.mo file. The ModelName.ref file is a text file with these lines:
Modelica or Buildings Library full model name
Buildings Library branch if not master (optional)
Buildings Library commit hash if not HEAD of branch (optional)
Here is the FloorOpenLoop.ref file:
Buildings.ThermalZones.EnergyPlus.Examples.VAVReheatRefBldgSmallOffice.FloorOpenLoop
issue1129_energyPlus_zone
This model exists in the issue1129_energyPlus_zone branch in the HEAD commit.
Models that are defined in the local .mo file but depend on a specific branch/commit of the Buildings library should use a .ref file as above but with just Buildings in the first line.
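The three-line .ref layout described above can be parsed as follows (a sketch; the function and field names are illustrative, not from the repository's scripts):

```python
def parse_ref(text):
    """Parse a ModelName.ref file: model name, optional branch, optional commit."""
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    return {
        "model": lines[0],                                  # full model name (or just Buildings)
        "branch": lines[1] if len(lines) > 1 else "master",  # default branch when omitted
        "commit": lines[2] if len(lines) > 2 else "HEAD",    # default commit when omitted
    }
```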
Run bld_fmu.py
from the OCT
sub-directory of the model's directory.
Run bld_fmu.py
from the JModelica
sub-directory of the model's directory.
Run run.py
or run_PyFMI.py
from the desired output sub-directory under the modeling tool sub-directory of the model's directory, such as MyModel/JModelica/out.
The --ncp and --final_time options are accepted by these scripts.

Run run.py
or run_QSS.py
in each QSS method sub-directory of the model's QSS
sub-directory.
Custom run.py
scripts may be present under QSS
with recommended or needed QSS options for that model.
Notes:
The --out and --zFac options are accepted by these scripts.
The modelDescription.xml DefaultExperiment section provides the default start time, stop time, and tolerance.

The use of JModelica-generated FMUs with QSS requires special treatment:
Event indicator (zero-crossing) variables must have names of the form __zc_VariableName, and their derivatives must be assigned to variables with names of the form __zc_der_VariableName.
The modelDescription.xml files in the FMU files need to be modified for QSS use in some cases. The FMU files are zip files, so the modelDescription.xml files can be extracted with unzip, modified, and then updated in the FMU by running zip -f.
These modelDescription.xml changes may be needed by JModelica-generated FMUs:
  Add the zero-crossing variables to a DiscreteStates section (between the Derivatives and InitialUnknowns sections) with dependency on the corresponding zero-crossing variable(s).
  Add the zero-crossing variables to the InitialUnknowns section with dependency on the corresponding zero-crossing variable(s).

The cmp_PyFMI_QSS.py script will run and compare the PyFMI and QSS simulations of the local model.
In addition to passing PyFMI and QSS options through, it accepts options such as:
--cmp=Variable to specify a variable to compare
--cmp=Variable=RMS to specify a variable to compare and an RMS difference limit to compare against
--red=File to redirect output to the specified file

The PyFMI and QSS runs are set to use only sampled output to aid in the automated comparison: sampled QSS output may not show key events accurately.
Comparison wrapper scripts, such as cmp_CVode_QSS3_Buildings.py
, can be used to run the comparison on a set of models, including any desired custom options.
By including RMS "pass" limits these can serve as a type of regression test to make sure that OCT and QSS updates do not cause unexpected solution discrepancies.
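The RMS difference measure used for such pass limits can be sketched as follows (an illustration of the measure, not the simdiff.py code):

```python
import math

def rms_difference(signal_a, signal_b):
    """Root-mean-square difference of two equally sampled signals."""
    assert len(signal_a) == len(signal_b)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(signal_a, signal_b)) / len(signal_a))
```

A comparison passes when the RMS difference is at or below the limit configured for that variable.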
The cmp_PyFMI_QSS_yaml.py
script can compare the YAML results file from two comparison runs with an optional relative tolerance argument to use when comparing variable RMS differences.
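Comparing the RMS values of two runs within a relative tolerance can be sketched as follows (the dict layout is an assumption, not the script's actual YAML schema):

```python
import math

def changed_variables(rms_run1, rms_run2, rel_tol=0.01):
    """List variables whose RMS differences moved beyond rel_tol between two runs."""
    return sorted(
        name
        for name in set(rms_run1) | set(rms_run2)
        # isclose applies rel_tol relative to the larger magnitude of the pair
        if not math.isclose(rms_run1.get(name, 0.0), rms_run2.get(name, 0.0), rel_tol=rel_tol)
    )
```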
Run the comparison
script from the tst
sub-directory of the model's directory passing the directories of the two results to be compared, such as:
comparison ../OCT ../QSS/QSS2
This generates report (.rpt
) files for each pair of signals compared, a summary (.sum
) file listing the number of signal comparisons that pass and that fail, a 0-size pass (.pass
) or fail (.fail
) file, and PDFs with plots of signal pairs that fail, showing the signal overlay and difference plots.
Notes:
Results are looked for in the out sub-directory of the specified directory if no .out files are found.
comparison wraps simdiff.py with the default comparison testing options.
The default comparison testing options may not be appropriate for all models.
The --coarse option in simdiff.py can reduce spurious differences when one signal has much more frequent sampling by only measuring the difference at the time steps of the "coarser" (lower sampling rate) signal. A combination of the --coarse option, refining the tolerances, and adjusting the QSS output sampling rates will probably be needed to obtain more meaningful comparisons. (PyFMI doesn't appear to offer a method of decreasing the simulation time steps to obtain a higher sampling rate.) For now, many models with a very good match between modeling tools will report as "failed" by comparison.

Run the regression
script from the tst
sub-directory of the model's directory passing the directories of the two results to be compared, such as:
regression ../QSS/QSS2/new ../QSS/QSS2
This generates report (.rpt
) files for each pair of signals compared, a summary (.sum
) file listing the number of signal comparisons that pass and that fail, a 0-size pass (.pass
) or fail (.fail
) file, and PDFs with plots of signal pairs that fail, showing the signal overlay and difference plots.
Notes:
Results are looked for in the out sub-directory of the specified directory if no .out files are found.
regression wraps simdiff.py with the default regression testing options.
The default regression testing options may not be appropriate for all models.
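The --coarse idea described in the comparison notes above, measuring differences only at the coarser signal's time steps, can be sketched as follows (linear interpolation of the finer signal; a sketch, not the simdiff.py implementation):

```python
import bisect

def interpolate(t, times, values):
    """Linear interpolation of a sampled signal at time t (t within the sampled range)."""
    i = bisect.bisect_left(times, t)
    if times[i] == t:
        return values[i]
    t0, t1 = times[i - 1], times[i]
    w = (t - t0) / (t1 - t0)
    return values[i - 1] * (1.0 - w) + values[i] * w

def coarse_differences(coarse, fine):
    """Signal differences measured only at the coarser signal's (time, value) steps."""
    times, values = zip(*fine)
    return [y - interpolate(t, times, values) for t, y in coarse]
```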