
Assessment of liquefaction-induced damage and losses #6081

Closed: VSilva closed this 1 year ago

VSilva commented 3 years ago

Introduction

Within the scope of the TREQ project, a new hazard module has been implemented to assess displacement fields due to liquefaction. This document describes the estimation of damage and losses due to liquefaction-induced ground deformations. This process differs from the already implemented loss assessment methodology due to the need to account for the probability of liquefaction at each site. This calculator should be able to provide three main types of outputs: 1) the average annual number of buildings in each damage state due to liquefaction, 2) average annual losses, and 3) probable maximum losses. In order to demonstrate this calculation workflow, three input files have been created:

  1. A set of ground deformation fields - File 1
  2. An exposure model (with 3 assets) - File 2
  3. A set of fragility functions (2 functions) - File 3

These files are further described in the calculation steps below.

Calculation Steps for the average number of buildings in each DS

  1. For the purpose of these specifications, it has been assumed that the format of the ground deformation fields will be similar to the current ground motion fields. This translates into tables with six columns: rlzi, sid, eid, gmv_VS, gmv_LS and PoL, where gmv_VS stands for the vertical settlement, gmv_LS for the lateral spreading, and PoL for the probability of liquefaction.
  2. The fragility and vulnerability functions for liquefaction usually accept only one intensity measure, not two (i.e. gmv_VS, gmv_LS). Thus, it is necessary for the user to specify whether to use the maximum displacement or the geometric mean. This implies that we need a parameter in the job file (deformation_component), which can take two values (maximum or geometric_mean).
  3. In this calculator OpenQuake will first calculate damage, and then potentially apply a consequence model for the estimation of the losses. For each event in the stochastic event set, the hazard module of OpenQuake will provide the two components of deformation (i.e. gmv_VS, gmv_LS). Then, depending on the deformation_component option, OpenQuake will either use the maximum component (max()) or the geometric mean (e.g. numpy.sqrt(gmv_VS^2+gmv_LS^2)) to estimate the probabilities of damage, in exactly the same manner as is currently done in OpenQuake for ground shaking.
  4. Using the ground deformation fields (File 1), the exposure dataset (File 2) and the set of fragility functions (File 3), and assuming deformation_component = maximum, the following probabilities of occurrence of each damage state (Prob(DS)) must be obtained for each event and asset (a sketch of steps 2-4 follows the table):
| eid | Asset1_DS1 | Asset1_DS2 | Asset2_DS1 | Asset2_DS2 | Asset3_DS1 | Asset3_DS2 |
|-----|------------|------------|------------|------------|------------|------------|
| 0 | 0.007 | 0.993 | 0.088 | 0.911 | 0.703 | 0.235 |
| 1 | 0.540 | 0.436 | 0.000 | 0.000 | 0.000 | 0.000 |
| 2 | 0.629 | 0.333 | 0.035 | 0.000 | 0.760 | 0.088 |
| 3 | 0.542 | 0.007 | 0.546 | 0.007 | 0.016 | 0.984 |
| 4 | 0.004 | 0.996 | 0.515 | 0.457 | 0.357 | 0.635 |
| 5 | 0.000 | 1.000 | 0.442 | 0.539 | 0.000 | 0.000 |
| 6 | 0.004 | 0.996 | 0.750 | 0.099 | 0.000 | 0.000 |
| 7 | 0.003 | 0.997 | 0.000 | 0.000 | 0.000 | 0.000 |
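A minimal sketch of steps 2-4 in Python (the function names, fragility parameters and the lognormal parameterisation are illustrative assumptions, not the engine's actual API):

    import numpy as np
    from scipy.stats import norm

    def combine_deformation(gmv_vs, gmv_ls, deformation_component):
        """Collapse the two deformation components into a single intensity value."""
        if deformation_component == 'maximum':
            return np.maximum(gmv_vs, gmv_ls)
        if deformation_component == 'geometric_mean':
            # conventional geometric mean; cf. the discussion of the exact
            # formula later in this thread
            return np.sqrt(gmv_vs * gmv_ls)
        raise ValueError('unknown deformation_component: %s' % deformation_component)

    def damage_state_probabilities(pgd, medians, betas):
        """Probabilities of occurrence of each damage state for one asset,
        assuming lognormal fragility curves, one (median, beta) pair per
        limit state, ordered from lightest to heaviest."""
        # probability of reaching or exceeding each limit state
        poes = [norm.cdf(np.log(pgd / m) / b) for m, b in zip(medians, betas)]
        poes.append(np.zeros_like(poes[0]))  # nothing beyond the heaviest state
        # probability of ending up exactly in each damage state
        return [p - q for p, q in zip(poes, poes[1:])]

    # placeholder deformation values for two events at one site
    pgd = combine_deformation(np.array([0.05, 0.30]),
                              np.array([0.10, 0.20]), 'maximum')
    print(damage_state_probabilities(pgd, medians=[0.12, 0.20], betas=[0.3, 0.4]))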
  5. To calculate the number of buildings in each damage state, it is necessary to multiply Prob(DS) by the associated number of buildings. This example would lead to the results presented below:
| eid | Asset1_DS1 | Asset1_DS2 | Asset2_DS1 | Asset2_DS2 | Asset3_DS1 | Asset3_DS2 |
|-----|------------|------------|------------|------------|------------|------------|
| 0 | 0.1 | 9.9 | 1.8 | 18.2 | 21.1 | 7.1 |
| 1 | 5.4 | 4.4 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2 | 6.3 | 3.3 | 0.7 | 0.0 | 22.8 | 2.6 |
| 3 | 5.4 | 0.1 | 10.9 | 0.1 | 0.5 | 29.5 |
| 4 | 0.0 | 10.0 | 10.3 | 9.1 | 10.7 | 19.1 |
| 5 | 0.0 | 10.0 | 8.8 | 10.8 | 0.0 | 0.0 |
| 6 | 0.0 | 10.0 | 15.0 | 2.0 | 0.0 | 0.0 |
| 7 | 0.0 | 10.0 | 0.0 | 0.0 | 0.0 | 0.0 |
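As a consistency check, these numbers appear to imply that Assets 1, 2 and 3 comprise 10, 20 and 30 buildings respectively (hypothetical counts inferred from the two tables): for eid 0, Asset 1 gives 0.993 × 10 ≈ 9.9 buildings in DS2 and Asset 2 gives 0.911 × 20 ≈ 18.2 buildings in DS2.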
  6. The previous results assume that liquefaction did occur. However, unlike ground shaking, in liquefaction risk assessment there is a chance that liquefaction does not occur at a particular location, hence the need to account for the probability of liquefaction (PoL) provided in the ground deformation fields (File 1). In order to calculate both the average annual number of buildings in each damage state and the standard deviation, it is necessary to perform a Monte Carlo simulation. In this process, a number of simulations are performed for each event, which means that we need a parameter in the job file to control this sampling process (e.g. no_liquefaction_simulations, which by default could be equal to 50). In each simulation, at each location, we use the PoL to sample whether liquefaction occurs or not (the result at each location is a 1 or a 0 - yes or no). This means that if we have 3 locations, we can have liquefaction at only 1 site, at 2 sites, at all of them, or at none of them. An example of this simulation is presented below for 3 simulations of the first 5 events (a sketch of this sampling follows the note after the table).
| eid | Simulation | Site 1 | Site 2 | Site 3 |
|-----|------------|--------|--------|--------|
| 0 | 1 | 0 | 0 | 0 |
| 0 | 2 | 0 | 0 | 0 |
| 0 | 3 | 0 | 0 | 0 |
| 1 | 1 | 0 | 0 | 0 |
| 1 | 2 | 0 | 0 | 0 |
| 1 | 3 | 0 | 0 | 0 |
| 2 | 1 | 0 | 0 | 1 |
| 2 | 2 | 0 | 0 | 0 |
| 2 | 3 | 0 | 0 | 0 |
| 3 | 1 | 0 | 1 | 0 |
| 3 | 2 | 0 | 0 | 0 |
| 3 | 3 | 0 | 1 | 0 |
| 4 | 1 | 1 | 1 | 0 |
| 4 | 2 | 0 | 1 | 0 |
| 4 | 3 | 0 | 1 | 1 |

Note: The PoL tends to be quite small, so it is normal to have thousands of simulations without significant liquefaction. In this exercise it is possible to observe that event eid = 0 never led to liquefaction in the region, while event eid = 4 caused liquefaction at at least one site in every simulation.
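A minimal sketch of this sampling step (the function name and PoL values are illustrative placeholders):

    import numpy as np

    rng = np.random.default_rng(42)  # fixed seed for reproducibility

    def sample_liquefaction(pol, no_liquefaction_simulations=50):
        """Sample liquefaction occurrence for one event.

        pol -- array with the probability of liquefaction at each site
        Returns a 0/1 matrix of shape (no_liquefaction_simulations, num_sites).
        """
        # Bernoulli sampling: liquefaction occurs wherever a uniform
        # draw falls below the site's PoL
        u = rng.random((no_liquefaction_simulations, len(pol)))
        return (u < pol).astype(int)

    # placeholder PoL values for the 3 sites of the example
    print(sample_liquefaction(np.array([0.02, 0.10, 0.05]), 3))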

  7. The average annual number of buildings in each damage state can be calculated at the asset level or for the entire portfolio. To calculate it, one must first multiply the results of the liquefaction simulations (the previous table of 0s and 1s) by the results of the table presented in Step 5. These values should also be summed per event in order to calculate the same metric for the entire portfolio. These results are presented below for complete damage only (a sketch of this step follows the note after the table).
| eid | Simulation | Site 1 | Site 2 | Site 3 | Total |
|-----|------------|--------|--------|--------|-------|
| 0 | 1 | 0 | 0 | 0 | 0 |
| 0 | 2 | 0 | 0 | 0 | 0 |
| 0 | 3 | 0 | 0 | 0 | 0 |
| 1 | 1 | 0 | 0 | 0 | 0 |
| 1 | 2 | 0 | 0 | 0 | 0 |
| 1 | 3 | 0 | 0 | 0 | 0 |
| 2 | 1 | 0 | 0 | 22.8 | 22.8 |
| 2 | 2 | 0 | 0 | 0 | 0 |
| 2 | 3 | 0 | 0 | 0 | 0 |
| 3 | 1 | 0 | 0.1 | 0 | 0.1 |
| 3 | 2 | 0 | 0 | 0 | 0 |
| 3 | 3 | 0 | 0.1 | 0 | 0.1 |
| 4 | 1 | 10.0 | 9.1 | 0 | 19.1 |
| 4 | 2 | 0 | 9.1 | 0 | 9.1 |
| 4 | 3 | 0 | 9.1 | 19.1 | 28.2 |

Note: These results represent what might have occurred in reality: an earthquake occurred, liquefaction was triggered at some sites, and at those sites we have a number of buildings in each damage state.
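A minimal sketch of this multiplication, using the eid = 4 rows of the two tables above (illustrative, not the engine's code):

    import numpy as np

    # buildings in the complete damage state per site for one event
    # (the eid = 4 row of the Step 5 table)
    buildings_complete = np.array([10.0, 9.1, 19.1])

    # 0/1 occurrence matrix from Step 6, one row per simulation
    occurrence = np.array([[1, 1, 0],
                           [0, 1, 0],
                           [0, 1, 1]])

    per_site = occurrence * buildings_complete  # buildings in DS2 per simulation and site
    total = per_site.sum(axis=1)                # portfolio total per simulation
    print(total)                                # [19.1  9.1 28.2], matching the table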

  8. For the calculation of the average annual number of buildings in each damage state and the associated standard deviation, one simply needs to calculate the mean and standard deviation across all events and simulations. For the mean, it is also necessary to divide by the investigation time. For the purpose of this exercise, let us assume an investigation time of 50 years and, in order to achieve convergence of the statistical parameters, let us perform 1000 simulations per event. The average annual number of buildings in each damage state is presented in the table below, along with the associated standard deviation:
|  | Asset 1 DS1 | Asset 1 DS2 | Asset 2 DS1 | Asset 2 DS2 | Asset 3 DS1 | Asset 3 DS2 |
|--|-------------|-------------|-------------|-------------|-------------|-------------|
| Annual average | 0.063 | 0.197 | 0.204 | 0.124 | 0.142 | 0.215 |
| Standard deviation | 1.513 | 3.064 | 3.506 | 3.051 | 4.020 | 5.478 |
  9. For the estimation of the same metrics for the entire portfolio, the same procedure is applied, but considering the sum of the number of buildings per damage state per simulation. These results are presented below (a sketch of steps 8-9 follows the table).
|  | All Assets DS1 | All Assets DS2 |
|--|----------------|----------------|
| Annual average | 0.409 | 0.536 |
| Standard deviation | 5.319 | 6.833 |
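A sketch of the statistics in steps 8-9, assuming the sampled building counts for one damage state have been flattened into a single array (the values below are placeholders, not the full Monte Carlo output):

    import numpy as np

    investigation_time = 50.0  # years
    num_simulations = 1000     # liquefaction simulations per event

    # buildings in a given damage state, one entry per event and simulation
    sampled = np.array([0.0, 0.0, 22.8, 0.1, 19.1, 9.1, 28.2])

    # divide by both the investigation time and the number of simulations
    # per event, since every event is replicated num_simulations times
    annual_average = sampled.sum() / (investigation_time * num_simulations)
    std_dev = sampled.std()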

The estimation of these metrics for every individual asset can be an extremely computationally demanding process, so by default OpenQuake should return only the average annual number of buildings per damage state for the entire portfolio.

Calculation Steps for the estimation of losses

  1. The estimation of the average annual losses follows a procedure identical to the one explained previously. However, instead of using fragility curves, one should use vulnerability curves or a consequence model. I propose to use vulnerability functions for the purpose of this calculator. For this exercise, two vulnerability functions have been considered (File 4).
  2. Similarly to what was presented in Step 4, the table below presents the loss ratios for each asset, considering all events:
| eid | Asset 1 LR | Asset 2 LR | Asset 3 LR |
|-----|------------|------------|------------|
| 0 | 0.997 | 0.955 | 0.587 |
| 1 | 0.707 | 0.000 | 0.000 |
| 2 | 0.647 | 0.017 | 0.468 |
| 3 | 0.278 | 0.279 | 0.992 |
| 4 | 0.998 | 0.715 | 0.813 |
| 5 | 1.000 | 0.760 | 0.000 |
| 6 | 0.998 | 0.474 | 0.000 |
| 7 | 0.999 | 0.000 | 0.000 |
  3. These loss ratios have to be multiplied by the corresponding economic values to estimate the actual economic losses per asset and per event, as presented below:
| eid | Asset 1 loss | Asset 2 loss | Asset 3 loss |
|-----|--------------|--------------|--------------|
| 0 | 498.3 | 1909.5 | 879.9 |
| 1 | 353.3 | 0.0 | 0.0 |
| 2 | 323.5 | 34.5 | 702.2 |
| 3 | 138.9 | 558.7 | 1488.0 |
| 4 | 498.9 | 1430.2 | 1220.1 |
| 5 | 500.0 | 1520.7 | 0.0 |
| 6 | 498.9 | 948.3 | 0.0 |
| 7 | 499.3 | 0.0 | 0.0 |
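As a consistency check, these losses appear to imply economic values of roughly 500, 2000 and 1500 for Assets 1, 2 and 3 respectively (hypothetical values inferred from the two tables): for eid 5, Asset 1 gives 1.000 × 500 = 500.0.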
  4. Once again it is necessary to account for the fact that liquefaction does not always occur at every location, so the Monte Carlo sampling procedure has to be applied to each event, as explained in Step 6. After this sampling process, the losses are summed across all simulations and divided by the investigation time and the number of simulations. The standard deviation is also computed across all simulations, leading to the results presented below for both the individual assets and the total portfolio:
|  | Asset 1 | Asset 2 | Asset 3 | All Assets |
|--|---------|---------|---------|------------|
| AAL | 11.1 | 22.5 | 12.6 | 46.2 |
| Std Dev | 161.6 | 404.7 | 295.8 | 521.5 |
  5. The last risk output is the loss exceedance curve. The process is similar to what has already been implemented in the OpenQuake engine: it is necessary to sort all of the losses in descending order and calculate the rate of exceedance of each loss level. The only difference is that in this case, in addition to dividing the number of exceedances by the investigation time (as already done in OpenQuake), it is also necessary to divide by the number of simulations (since there are more losses than events). The table below shows the losses for a number of return periods; since the investigation time in this example was only 50 years, the return periods are low (a sketch of this step follows the table).
| Annual rate | Return period | Loss |
|-------------|---------------|------|
| 0.02 | 50 | 2789.4 |
| 0.1 | 10 | 2650.2 |
| 0.2 | 5 | 2046.7 |
| 1 | 1 | 1909.5 |
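A minimal sketch of the loss exceedance curve computation (the loss values are placeholders standing in for the full Monte Carlo output):

    import numpy as np

    investigation_time = 50.0
    num_simulations = 1000

    # sampled portfolio losses, one entry per event and simulation
    losses = np.array([2789.4, 2650.2, 2046.7, 1909.5, 879.9, 0.0, 0.0])

    # sort in descending order: the i-th largest loss is exceeded i times
    sorted_losses = np.sort(losses)[::-1]
    n_exceedances = np.arange(1, len(sorted_losses) + 1)
    # divide by the number of simulations as well as the investigation
    # time, since there are num_simulations losses per event
    annual_rates = n_exceedances / (investigation_time * num_simulations)
    return_periods = 1.0 / annual_rates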
micheles commented 3 years ago

> The fragility and vulnerability functions for liquefaction usually accept only one intensity measure, not two (i.e. gmv_VS, gmv_LS). Thus, it is necessary for the user to specify whether to use the maximum displacement or the geometric mean. This implies that we need a parameter in the job file (deformation_component), which can take two values (maximum or geometric_mean).

I do not understand that, since you did not provide the fragility functions in XML format. I would think that the choice between maximum and geometric_mean would be made at the fragility function level; for instance, in the case of maximum I would expect something like this:

    <fragilityModel assetCategory="building" id="Vitor-provided"
                    lossCategory="structural">
        <description>Fragility model for liquefaction</description>
        <limitStates>moderate complete</limitStates>
        <fragilityFunction format="continuous" id="A" shape="logncdf">
            <imls imt="PGDMax"/>
            <params ls="moderate" mean="0.12" stddev="0.03"/>
            <params ls="complete" mean="0.20" stddev="0.04"/>
        </fragilityFunction>
        <fragilityFunction format="continuous" id="B" shape="logncdf">
            <imls imt="PGDMax"/>
            <params ls="moderate" mean="0.15" stddev="0.04"/>
            <params ls="complete" mean="0.25" stddev="0.05"/>
        </fragilityFunction>
    </fragilityModel>

While for the geometric mean there would be different fragility functions with imt="PGDGeomMean". Or am I wrong and there are two sets of fragility functions, one for PGDLatSpread and one for PGDSettle? Then how should I combine them?

VSilva commented 3 years ago

There are only a handful of fragility/vulnerability functions in the world for liquefaction, so I think it is unlikely that we will have one for vertical settlement and another for lateral spreading.

I do like the idea of including in the fragility function (XML) file the information about which IM is being used. For empirical functions it will most likely be the maximum PGD, while for analytical studies it might be possible to use the PGDGeomMean.

Nonetheless, it is still necessary on the hazard side (at the level of the generation of the displacement fields) to indicate whether the geometric mean or the maximum component was used.

micheles commented 3 years ago

> the geometric mean (e.g. numpy.sqrt(gmv_VS^2+gmv_LS^2))

@VSilva here I assume that there was a misprint and that the correct formula is numpy.sqrt(gmv_VS*gmv_LS), otherwise it would not be the geometric mean, just the diagonal. Please confirm.
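For reference, a comparison of the two quantities with illustrative values:

    import numpy as np

    gmv_vs, gmv_ls = 0.12, 0.30
    diagonal = np.sqrt(gmv_vs**2 + gmv_ls**2)  # Euclidean norm, ~0.323
    geometric_mean = np.sqrt(gmv_vs * gmv_ls)  # geometric mean, ~0.190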

micheles commented 2 years ago

I am not sure if we can close this, given that the TREQ project has ended.