This framework calculates, analyses and compares the following systemic risk measures:
BUB (Bubbles Flag)
BMPH (Boom Phases Flag)
BRPH (Burst Phases Flag)
BC (Bubbling Capitalization)
BCP (Bubbling Capitalization Percentage)
AR (Absorption Ratio) by Kritzman et al. (2010)
CATFIN by Allen et al. (2012)
CS (Correlation Surprise) by Kinlaw & Turkington (2012)
TI (Turbulence Index) by Kritzman & Li (2010)
Principal Component Analysis
DCI (Dynamic Causality Index)
CIO ("In & Out" Connections)
CIOO ("In & Out - Other" Connections)
Network Centralities: Betweenness, Degree, Closeness, Clustering, Eigenvector & Katz
JPoD (Joint Probability of Default)
FSI (Financial Stability Index)
PCE (Probability of Cascade Effects)
DiDe (Distress Dependency)
SI (Systemic Importance)
SV (Systemic Vulnerability)
CoJPoDs (Conditional Joint Probabilities of Default)
Full Cross-Quantilograms
Partial Cross-Quantilograms
Idiosyncratic Metrics: Beta, Value-at-Risk & Expected Shortfall
CAViaR (Conditional Autoregressive Value-at-Risk) by White et al. (2015)
CoVaR & Delta CoVaR (Conditional Value-at-Risk) by Adrian & Brunnermeier (2008)
MES (Marginal Expected Shortfall) by Acharya et al. (2010)
SES (Systemic Expected Shortfall) by Acharya et al. (2010)
SRISK (Conditional Capital Shortfall Index) by Brownlees & Engle (2010)
D2C (Distance To Capital) by Chan-Lau & Sy (2007)
D2D (Distance To Default) by Vassalou & Xing (2004)
DIP (Distress Insurance Premium) by Black et al. (2012)
SCCA (Systemic Contingent Claims Analysis) by Jobst & Gray (2013)
ILLIQ (Illiquidity Measure) by Amihud (2002)
RIS (Roll Implicit Spread) by Hasbrouck (2009)
Classic Indicators: Hui-Heubel Liquidity Ratio, Turnover Ratio & Variance Ratio
2-States Model: High & Low Volatility
3-States Model: High, Medium & Low Volatility
4-States Model: High & Low Volatility With Corrections
AP (Average Probability of High Volatility)
JP (Joint Probability of High Volatility)
SI (Spillover Index)
Spillovers From & To
Net Spillovers
ACHI (Average Chi) by Balla et al. (2014)
ADR (Asymptotic Dependence Rate) by Balla et al. (2014)
FRM (Financial Risk Meter) by Mihoci et al. (2020)
Some of the aforementioned models have been improved or extended following the methodologies described in the V-Lab Documentation, a valuable resource on systemic risk measurement.
The project has been published in "MATLAB Digest | Financial Services | May 2019".
If you find it useful, please consider making a donation to support its maintenance and development:
The minimum required MATLAB version is R2014b. In addition, the following products and toolboxes must be installed in order to properly execute the scripts:
run.m: performs the computation of systemic risk measures;
analyze.m: analyzes previously computed systemic risk measures.
Datasets must be built following the structure of the default ones included in every release of the framework (see the Datasets folder). Below is a list of the supported Excel sheets and their respective content:
Shares: the prices, or returns expressed in logarithmic scale, of the benchmark index (whose column can be labeled with any desired name and must be placed just after the observation dates) and of the firms, with daily frequency.
Volumes: trading volume of the firms expressed in currency amount, with daily frequency.
Capitalizations: market capitalization of the firms, with daily frequency.
CDS: the risk-free rate expressed in decimals (the column must be called RF
and must be placed just after observation dates) and the credit default swap spreads of the firms expressed in basis points, with daily frequency.
Balance Sheet Components: the balance sheet components of the firms, expressed in a homogeneous observation frequency, currency and scale, structured as below:
State Variables: systemic state variables, with daily frequency.
Groups: group definitions are based on three-value tuples in which the Name field represents the group name, the Short Name field represents the group acronym and the Count field represents the number of firms to include in the group. The sum of the Count fields must be equal to the number of firms. For example, the following groups definition:
Firms in the Shares Sheet: A, B, C, D, E, F, G, H
Insurance Companies: 2
Investment Banks: 2
Commercial Banks: 3
Government-sponsored Enterprises: 1
produces the following outcome:
"Insurance Companies" contains A and B
"Investment Banks" contains C and D
"Commercial Banks" contains E, F and G
"Government-sponsored Enterprises" contains H
Crises: crises can be defined using two different approaches:
Events: the Date field represents the event dates and the Name field represents the event names; every dataset observation matching an event date is considered to be associated with a distress occurrence.
Ranges: the Name field represents the crisis names, the Start Date field represents the crisis start dates and the End Date field represents the crisis end dates; every dataset observation falling inside a crisis range is considered to be part of a distress period.
The minimum allowed dataset must include the Shares sheet with a benchmark index and at least 3 firms. Observations must have a daily frequency and, in order to run consistent calculations, their minimum required amount is 253 for prices (which translates into a full business year plus an additional observation at the beginning of the time series, lost during the computation of returns) or 252 for logarithmic returns. They must have been previously validated and preprocessed by:
It is not mandatory to include the financial time series used by unwanted measures. Optional financial time series used by included measures can be omitted, as long as their contribution is not necessary. Below is a list of required and optional time series for each category of measures:
Firms whose time series value is constantly equal to 0 in the tail, for a span that includes a customizable percentage of total observations (by default 5%), are considered to be defaulted. Firms whose Equity value is constantly negative in the tail, for a span that includes a customizable percentage of total observations (by default 5%), are considered to be insolvent. This allows the scripts to exclude them from computations from a certain point in time onward; defaulted firms are excluded from all the measures, insolvent firms only from the SCCA default measures.
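The flagging rule above can be sketched as follows in Python (the framework itself is written in MATLAB); the helper name and the way the tail span is rounded are assumptions, not the actual implementation:

```python
# Hedged sketch of the default/insolvency rule: a firm is flagged when the
# tail of its series (by default the last 5% of the observations) constantly
# satisfies a predicate - equal to zero (defaulted) or negative (insolvent).
def tail_flag(series, predicate, tail_share=0.05):
    # Assumed rounding: truncate, but always check at least one observation.
    tail_length = max(1, int(len(series) * tail_share))
    return all(predicate(x) for x in series[-tail_length:])

observations = [1.2, 1.1, 0.9] + [0.0] * 17  # 20 observations, zero tail
is_defaulted = tail_flag(observations, lambda x: x == 0.0)
is_insolvent = tail_flag(observations, lambda x: x < 0.0)
print(is_defaulted, is_insolvent)  # True False
```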
Once a dataset has been parsed, the script stores its output in the form of a .mat file; therefore, the parsing process is executed only during the first run. The file's last modification date is taken into account by the script, and the dataset is parsed again if the Excel spreadsheet is modified.
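The caching behavior described above can be sketched as follows in Python; the helper function and file names are hypothetical and only illustrate the modification-date check, not the framework's actual code:

```python
import os

# Illustrative sketch: reparse the Excel dataset only when no parsed .mat
# output exists yet, or when the spreadsheet is newer than that output.
def needs_parsing(xlsx_path, mat_path):
    if not os.path.isfile(mat_path):
        return True  # first run: no parsed output stored yet
    return os.path.getmtime(xlsx_path) > os.path.getmtime(mat_path)
```

On the first run the .mat file is missing, so parsing always happens; afterwards, only a more recently modified spreadsheet triggers it again.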
The dataset parsing process might present issues related to the version, bitness and regional settings of the OS, Excel and/or MATLAB. Due to the high number of users asking for help, support is no longer guaranteed; the guidelines below can help solve the majority of problems:
A bitness mismatch between the OS and Excel may cause errors that are difficult to track. Using the same bitness for both is recommended.
An Excel locale other than English may produce wrong outputs related to date formats, text values and numerical values with decimal and/or thousands separators. A locale switch is recommended.
Excel 2019 and Excel 365 may present compatibility issues with MATLAB versions prior to R2019b. In later versions, the built-in function readtable may still not handle some Excel spreadsheets properly. A downgrade to Excel 2016 is recommended.
Excel spreadsheets might contain empty but defined cells in columns or rows located far away from the area in which data is stored. Those cells extend the range being read by the parser, producing false positives when checking for missing values.
Dataset parsing is handled by the ScriptsDataset\parse_dataset.m function. It is important to check the correctness of the arguments being passed to the function call. Error messages thrown by the aforementioned function are pretty straightforward, and a debugging session should be enough to find the underlying causes and fix datasets and/or internal functions accordingly.
When everything else fails, try using a plain Excel spreadsheet (.xlsx) with no filters and styles, or a binary Excel spreadsheet (.xlsb).
Some scripts may take a very long time to finish in the presence of huge datasets and/or extreme parametrizations. The performance of calculations may vary depending on the CPU processing speed and the number of CPU cores available for parallel computing.
The Datasets folder includes many premade datasets. The main one (Example_Large.xlsx), based on the US financial sector, defines the following entities and data over a period of time ranging from 2002 to 2019 (both included):