kartoza / fbf-project

Project data and resources for WB Forecast Based Financing work

Trigger calculation #73

Open timlinux opened 5 years ago

timlinux commented 5 years ago

Pre-computed flood boundary for e.g. 100 year

@lucernae please generate a sample dataset for a 20-year return period, since the trigger conditions Hassan proposed are for 10 or 20 years.

http://www.globalfloods.eu/proxy/?srv=ows&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetFeatureInfo&FORMAT=image%2Fpng&TRANSPARENT=true&QUERY_LAYERS=sumAL43EGE%2CreportingPoints&TIME=2019-12-01T00%3A00%3A00&LAYERS=sumAL43EGE%2CreportingPoints&INFO_FORMAT=application%2Fjson&I=797&J=75&WIDTH=832&HEIGHT=832&CRS=EPSG%3A3857&STYLES=&BBOX=9966573.160085278%2C-1304525.282733674%2C11323279.454128297%2C52181.01130934769


timlinux commented 5 years ago

Option 1:

Use e.g. the 5-year WMS product and do a colour lookup.

If it is a 5-year return period, then the probability needs to exceed 75% to trigger.

@Mazano to set up a new version of our docker image that includes pg_cron (https://github.com/citusdata/pg_cron) and pl/python

@lucernae: to implement the GloFAS fetch routine, e.g. every day, and populate any new floods
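To make the daily fetch concrete, here is a small sketch of building the GloFAS GetFeatureInfo request from the example URL above. The base URL and layer names are copied from that request; the helper name and argument list are my own invention, and the surrounding job scheduling (pg_cron + pl/python) is assumed, not implemented here.

```python
from urllib.parse import urlencode

# Base URL and layer names taken from the example request in this thread.
GLOFAS_BASE = "http://www.globalfloods.eu/proxy/"

def build_getfeatureinfo_url(time_iso, i, j, bbox, width=832, height=832):
    """Build a GloFAS WMS GetFeatureInfo URL for one pixel and forecast time.

    `bbox` is (minx, miny, maxx, maxy) in EPSG:3857, matching the sample URL.
    """
    params = {
        "srv": "ows",
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetFeatureInfo",
        "QUERY_LAYERS": "sumAL43EGE,reportingPoints",
        "LAYERS": "sumAL43EGE,reportingPoints",
        "INFO_FORMAT": "application/json",
        "TIME": time_iso,
        "I": i,
        "J": j,
        "WIDTH": width,
        "HEIGHT": height,
        "CRS": "EPSG:3857",
        "BBOX": ",".join(str(c) for c in bbox),
    }
    return GLOFAS_BASE + "?" + urlencode(params)
```

A daily job would call this per reporting point, fetch the JSON, and hand it to the trigger checks discussed below.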

lucernae commented 5 years ago

I will explain here what I imagine would happen:

These recommendations are based on Hassan's current recommendation from his field visit report (I will attach it to the repo later).

The criteria

There are two trigger stages: pre-activation and activation.

Pre activation

Quoted from the recommendation:

1)  Pre-activation trigger to get ready with resource in the chapter using GLOFAS. Activation Hazard criteria are
a.  Hazard > 10 year return period 
b.  50% or more Probability 
c.  Lead time minimum 10 days
d.  20% more house likely to be damage

Activation

Any municipality area that is in the pre-activation stage should be checked again for the activation trigger. Do not check the activation trigger if the area has not reached pre-activation yet (there would be no time for readiness). Quoted from the recommendation:

2)  Activation trigger to activate EAP – evacuation 
a.  Hazard > 10 year return period 
b.  70% or more Probability 
c.  Lead time minimum 3 days
d.  20% more house likely to be damage
e.  BMKG Signature also shows high likelihood and high impact  
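The two quoted criteria blocks can be sketched as one check function. The thresholds come straight from Hassan's recommendation above; the function name, argument names, and the representation of the BMKG signature as a boolean flag are my own assumptions. Note the activation branch requires a prior pre-activation, per the rule above.

```python
# Thresholds from Hassan's recommendation quoted above.
RETURN_PERIOD_MIN = 10      # years; hazard must exceed this
DAMAGE_FRACTION_MIN = 0.20  # 20% or more houses likely to be damaged

def check_trigger(return_period, probability, lead_time_days,
                  damage_fraction, pre_activated=False, bmkg_high=False):
    """Return 'activation', 'pre-activation', or None for one area.

    Activation is only considered for areas that already reached
    pre-activation, as the recommendation requires.
    """
    hazard_ok = return_period > RETURN_PERIOD_MIN
    damage_ok = damage_fraction >= DAMAGE_FRACTION_MIN
    if not (hazard_ok and damage_ok):
        return None
    if (pre_activated and probability >= 0.70
            and lead_time_days >= 3 and bmkg_high):
        return "activation"
    if probability >= 0.50 and lead_time_days >= 10:
        return "pre-activation"
    return None
```

This is only a sketch of the criteria as quoted; how "20% more houses likely to be damaged" is actually computed is a separate (impact) question.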

Preparing Flood Depth Map based on Return Period

We have to have a flood depth map (continuous depth or classified by depth) that is associated with a return period. What we have from Hassan now is a 100-year return period flood depth map around Karawang, classified by @timlinux, here: https://github.com/kartoza/fbf-project/tree/develop/Datasets/Flood/flood_classes .

We need to have this flood map ready in the database.

@lucernae please generate a sample dataset for a 20-year return period, since the trigger conditions Hassan proposed are for 10 or 20 years.

I can't, @timlinux, I don't have the flood model. My suggestion would be to treat the 100-year flood map as the 10- or 20-year one. The trigger conditions Hassan proposed are for more than a 10-year return period, so the 100-year map can be used for now. If we get a 10/20-year return period map, we can switch accordingly.

Fetch GloFAS information in the background, daily

There are two alternative ways of getting the GloFAS forecast. I will explain each in turn.

Getting forecast from reporting points

This approach is limited to areas near the reporting points. We must have every reporting point's location ready in the database. For each reporting point, fetch the GloFAS information associated with it. At each point we will have:

[screenshot: GloFAS reporting point details, 2019-12-03]
  1. Alert level (each alert level is associated with a return period)
  2. Forecast date (in our terms, the acquisition date: the date the forecast was made)
  3. EPS threshold exceedance (three numbers, one exceedance probability percentage per return period)
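A sketch of pulling those three fields out of a reporting-point response. The payload layout here (key names, the order of the three EPS numbers, and the return-period labels) is invented for illustration; the real GloFAS GetFeatureInfo JSON will need its own parsing once we inspect it.

```python
def parse_reporting_point(payload):
    """Extract the three fields listed above from one reporting point.

    `payload` is a plain dict; its structure is a placeholder assumption,
    not the real GloFAS response schema.
    """
    return {
        # each alert level maps to a return-period band
        "alert_level": int(payload["alertLevel"]),
        # "forecast date" in GloFAS terms == our acquisition date
        "acquisition_date": payload["forecastDate"],
        # one exceedance probability (%) per return period (labels assumed)
        "eps_exceedance": {
            "2y": payload["eps"][0],
            "5y": payload["eps"][1],
            "20y": payload["eps"][2],
        },
    }
```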

In addition to that, there is a barchart like this:

[screenshot: GloFAS EPS bar chart, 2019-12-03]

The bar chart contains information on the EPS for a given date. We use this to get the lead time. For example, a severe alert means a more than 20-year return period. If we search for a probability of more than 25% with forecast date (acquisition date in our terms) 2 December, then the flood is predicted to happen on 11 December, so the lead time is 10 days.
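The lead-time arithmetic in that example counts the forecast day itself (2 December to 11 December is called 10 days), so a plain date difference needs a "+ 1". That inclusive counting is my reading of the example above, not a documented GloFAS convention.

```python
from datetime import date

def lead_time_days(acquisition_date, predicted_date):
    """Lead time counted inclusively, matching the worked example:
    2 Dec -> 11 Dec gives 10 days (plain difference would give 9)."""
    return (predicted_date - acquisition_date).days + 1
```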

So, by querying the reporting points daily, we are looking at:

  1. Alert level 2 (medium) only, which is associated with a 5 to 20 year return period (Hassan recommends 10, which is in this range)
  2. 50% or more exceedance probability with a minimum 10-day lead time for pre-activation; 70% or more exceedance probability with a minimum 3-day lead time for activation, if the same area had already reached the pre-activation trigger.

By doing this, we only save forecast information that fires the pre-activation/activation triggers to the database, then associate it with the corresponding flood map for that return period (already in the database) and the overlapping station points. Any forecast that doesn't trigger is not saved, because it has no flood map.

The flood forecast browser in the frontend should only show forecasts that are in the database, i.e. forecasts that trigger pre-activation or activation.

Getting Forecast From GloFAS WMS map

We can get a specific prediction for each category, but it doesn't show us the probability.

For pre-activation

In GloFAS there is a WMS map for "Flood summary for days 11-30". We can use this to get the pre-activation trigger. If a block of area is red according to the legend (5-20 year return period) or purple (more than 20-year return period), that area received the pre-activation trigger. It satisfies all the above conditions for pre-activation except the exceedance probability; we don't know the effective probability from this map. Overlap the area with the flood map of the associated return period to get the whole flood map area.

For activation

In GloFAS there is a WMS map for "Flood summary for days 4-10". We can use this to get the activation trigger. If a block of area is red according to the legend (5-20 year return period) or purple (more than 20-year return period), and the same area had received the pre-activation trigger in the past, that area received the activation trigger. It satisfies all the above conditions for activation except the exceedance probability; we don't know the effective probability from this map. Overlap the area with the flood map of the associated return period to get the whole flood map area.
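The colour lookup for both summary layers can be sketched as below. The actual legend RGB values are placeholders (assumptions until read off the real GloFAS legend); the point is just the mechanism: map a pixel colour to a return-period class and treat red/purple cells as triggering.

```python
# Placeholder legend colours -- NOT the real GloFAS legend values.
LEGEND = {
    (255, 0, 0): "5-20y",    # red, assumed: 5-20 year return period
    (128, 0, 128): ">20y",   # purple, assumed: more than 20 year
}

def classify_pixel(rgb):
    """Map one RGB pixel to a return-period class, or None."""
    return LEGEND.get(tuple(rgb))

def pixel_triggers(rgb):
    """True when the cell colour means a 5-20y or >20y return period,
    i.e. the area would receive the (pre-)activation trigger."""
    return classify_pixel(rgb) in ("5-20y", ">20y")
```

A real implementation would fetch the GetMap image and run this over the pixels covering each municipality, but that part is omitted here.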

Pros/Cons for the approach

Getting forecast from reporting points

Pros:

  1. Granular exceedance probability
  2. We only need to look up the fixed set of available station points

Cons:

  1. The data may be difficult to parse, because it is HTML.
  2. Need python scripts in the backend to fetch the data

Getting forecast from WMS map

Pros:

  1. A colour lookup can easily say which municipality areas received a trigger

Cons:

  1. The map doesn't give a specific exceedance probability.

lucernae commented 4 years ago

Current accepted approach:

[diagram: Forecast fetch]

Proposed approach for impact limit evaluations:

[diagram: Impact limit evaluation]

Proposed trigger status evaluation:

[diagram: Trigger status escalations]

timlinux commented 4 years ago

Thanks @lucernae the impact limit evaluation looks great - I expect we will need to tweak this over time but hopefully that is just a matter of changing some variables in your code.

For the trigger status escalations, my understanding from Catalina is that the trigger status is based on time. So if

expected arrival of flood is > 3 days and buildings are over the threshold for any village, we would have event status of pre-activation
expected arrival of flood is <= 3 days and buildings are over the threshold for any village, we would have event status of activation

Is that your view too, or were you thinking of different logic?

lucernae commented 4 years ago

expected arrival of flood is > 3 days and buildings are over the threshold for any village, we would have event status of pre-activation
expected arrival of flood is <= 3 days and buildings are over the threshold for any village, we would have event status of activation

Yes, same kind of logic. The candidate of which flood event that will trigger activation thresholds were described here: https://github.com/kartoza/fbf-project/issues/73#issuecomment-561162689

For pre-activation, the lead_time criterion is a minimum of 10 days. For activation, the lead_time criterion is a minimum of 3 days, and the area must previously have satisfied the pre-activation criteria. We don't yet know the required delay between pre-activation and activation, so our assumption for now is that the escalation happens with a one-day difference.
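The escalation rule just described can be sketched as a small state transition, run once per daily evaluation. The function shape is my own; the rules encoded are the ones stated above: pre-activation needs at least a 10-day lead time, activation needs at least a 3-day lead time plus a prior pre-activation, and there is no STOP/de-escalation rule yet.

```python
def next_status(lead_time_days, criteria_met, previous_status=None):
    """One daily escalation step for a flood event.

    `criteria_met` is whether the hazard/probability/impact criteria hold
    for this run; status values are None, 'pre-activation', 'activation'.
    """
    if previous_status == "activation":
        return "activation"  # no STOP criteria defined yet
    if not criteria_met:
        return previous_status  # no de-escalation rules yet either
    if previous_status == "pre-activation" and lead_time_days >= 3:
        return "activation"  # escalate only after prior pre-activation
    if previous_status is None and lead_time_days >= 10:
        return "pre-activation"
    return previous_status
```

Because the function only escalates one step per call, running it daily gives the assumed one-day minimum delay between pre-activation and activation for free.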

There are also no criteria for STOP at the moment.

Thanks @lucernae the impact limit evaluation looks great - I expect we will need to tweak this over time but hopefully that is just a matter of changing some variables in your code.

I made the criteria parameters (the limit value, lead time value, etc.) changeable. They are still hardcoded, but refactorable, since each criterion refers to the same variable. We can pass the parameters in easily once we decide where to store them.

As for the logic itself (what to compare, when to compare, what to update), it is still Python code. I hope we can refactor it into backend modules (Python modules) for easier modification later on. At the moment it is one giant class in a single script.

timlinux commented 4 years ago

Thanks @lucernae