kartoza / fbf-project

Project data and resources for WB Forecast Based Financing work

Data model for floods and affected areas #39

Open timlinux opened 4 years ago

timlinux commented 4 years ago

Hi @NyakudyaA @lucernae @ann26 . @gubuntu please review and add comments as needed.

So here are my proposed updates to the data model to support the kind of workflows we have discussed with Erin and Catalina:

[Attached diagram: proposed fbf-website data model]

In this approach the data model is partly designed for fast rendering in the web UI and partly designed for the idea that flooded areas for a given return period may be computed once and then re-used multiple times. The little sticky notes in the diagram should explain things that are not evident from the diagram itself.

@lucernae for GloFAS we would work with @NyakudyaA for the sites where they monitor to pre-compute flood extent models, then populate our flood tables with the results.
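To illustrate the "compute once, re-use many times" idea, here is a minimal sketch of a flood-map store keyed by station and return period. All names (`FloodMap`, `FloodMapStore`, the layer references) are illustrative, not the actual schema:

```python
# Sketch: flood extents are pre-computed per (station, return period)
# and stored once; forecasts only look them up instead of re-running
# the hydrology model. Names are hypothetical, not the real data model.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class FloodMap:
    station_id: str      # GloFAS monitoring site
    return_period: int   # years, e.g. 2, 5, 10, 25
    layer: str           # reference to the stored flood extent layer


class FloodMapStore:
    def __init__(self):
        self._maps = {}

    def add(self, flood_map: FloodMap) -> None:
        self._maps[(flood_map.station_id, flood_map.return_period)] = flood_map

    def lookup(self, station_id: str, return_period: int) -> Optional[FloodMap]:
        # Re-use the pre-computed extent; no model run at forecast time.
        return self._maps.get((station_id, return_period))


store = FloodMapStore()
store.add(FloodMap("glofas_001", 10, "flood_extent_glofas_001_rp10"))
print(store.lookup("glofas_001", 10).layer)  # → flood_extent_glofas_001_rp10
```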

The one potential gotcha with this approach is if Hassan wants to use fine-grained damage curves for building vulnerability, etc., which rely on absolute depth values rather than flood classes. @lucernae, if you could work with him on a methodology to make his curves work against classes, that would be great. We could make the classes quite granular (e.g. 50 cm intervals) and also add an attribute indicating whether buildings in that class are impacted (or calculate this on a per-building basis).
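One way to reconcile continuous damage curves with 50 cm classes is to evaluate the curve at each class midpoint and attach the impacted flag per class. The damage curve and the 0.5 m impact threshold below are placeholder assumptions, not Hassan's actual methodology:

```python
# Hypothetical sketch: converting a continuous depth-damage curve into
# 50 cm depth classes with a per-class "impacted" flag.
IMPACT_THRESHOLD_M = 0.5  # assumption: shallower water does not impact buildings


def damage_fraction(depth_m: float) -> float:
    # Placeholder curve (assumption): damage rises linearly, saturating at 3 m.
    return min(depth_m / 3.0, 1.0)


def classify(interval_m: float = 0.5, max_depth_m: float = 3.0):
    classes = []
    lower = 0.0
    while lower < max_depth_m:
        upper = lower + interval_m
        midpoint = (lower + upper) / 2
        classes.append({
            "class": f"{lower:.1f}-{upper:.1f} m",
            # Evaluate the continuous curve at the class midpoint.
            "damage": round(damage_fraction(midpoint), 2),
            # Could alternatively be computed per building.
            "impacted": midpoint >= IMPACT_THRESHOLD_M,
        })
        lower = upper
    return classes


for c in classify():
    print(c)
```

With these assumptions the first class (0.0–0.5 m) comes out as not impacted and deeper classes carry increasing damage fractions.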

lucernae commented 4 years ago

Yes @timlinux, for GloFAS we could prepare it beforehand and just offload it to the db. I will mainly assist Hassan by providing the Indonesian station points to calibrate GloFAS (also points from the field visits).

In parallel to that, from what I understand, the flood forecast might be continuous or classified.

I discussed it with @ann26 last Friday: it doesn't make sense to me to upload GloFAS predictions, since computing them and offloading the results to the db is entirely an offline GIS operation. In other words, we probably only do it once or twice for each return period, once we know the flood extent from the hydrology models.

What makes sense to me is that the upload functionality is used for the flood forecast. If we use the GloFAS flood forecast, then for each forecast we will have a station point and the probability that the discharge at that station will exceed a given return period. From that we correlate with the flood map for that return period from the database. So the flood map is somewhat static, unless Hassan can make predictions of the flood depth from the current hydrological measurements in that forecast (discharge or water level at those stations).

However, I should note that this approach involves getting station point data from PusAIR; I only have it for the Karawang-Bekasi district for now. I hope that is good enough for our goal until December.
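The workflow described above can be sketched as a lookup: each forecast carries per-station exceedance probabilities for several return periods, and we serve the stored flood map for the largest return period whose probability passes a trigger threshold. The threshold, station id, and layer names are all assumptions for illustration:

```python
# Sketch of the forecast-to-flood-map correlation described above.
TRIGGER_PROBABILITY = 0.5  # assumed activation threshold

# Pre-computed flood maps per (station, return period), as in the data model.
FLOOD_MAPS = {
    ("karawang_01", 5): "flood_extent_karawang_01_rp5",
    ("karawang_01", 10): "flood_extent_karawang_01_rp10",
}


def select_flood_map(station_id, exceedance_probs):
    """exceedance_probs: {return_period_years: probability_of_exceedance}"""
    triggered = [rp for rp, p in exceedance_probs.items()
                 if p >= TRIGGER_PROBABILITY]
    if not triggered:
        return None
    # Serve the map for the largest return period that was triggered.
    return FLOOD_MAPS.get((station_id, max(triggered)))


print(select_flood_map("karawang_01", {5: 0.9, 10: 0.6, 25: 0.1}))
# → flood_extent_karawang_01_rp10
```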

The next possible approach for the flood forecast is to get CAP feeds from Signature (BMKG). These are classified flood maps. If we use them for our flood forecast upload, we can use them directly as flood maps for the time being, entirely independently of GloFAS. The CAP feeds from Signature are impact(-based-forecast) maps (as BMKG told us), but since they contain no impact quantities, we might be able to use them as hazard maps.
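For reference, CAP is a small XML standard (OASIS CAP 1.2), so ingesting a Signature feed as a hazard map mostly means pulling the event, severity, and area polygon out of each alert. A minimal sketch with the standard library; the sample alert is invented and trimmed to the fields relevant here, not a full valid CAP message:

```python
# Sketch: extracting hazard-map fields from a CAP 1.2 alert.
import xml.etree.ElementTree as ET

CAP_NS = {"cap": "urn:oasis:names:tc:emergency:cap:1.2"}

# Invented minimal sample; a real CAP alert has more mandatory elements
# (identifier, sender, sent, status, msgType, scope, ...).
SAMPLE_ALERT = """<?xml version="1.0"?>
<alert xmlns="urn:oasis:names:tc:emergency:cap:1.2">
  <info>
    <event>Flood</event>
    <severity>Severe</severity>
    <area>
      <areaDesc>Karawang</areaDesc>
      <polygon>-6.30,107.30 -6.30,107.40 -6.20,107.40 -6.30,107.30</polygon>
    </area>
  </info>
</alert>"""


def parse_cap(xml_text):
    root = ET.fromstring(xml_text)
    info = root.find("cap:info", CAP_NS)
    area = info.find("cap:area", CAP_NS)
    return {
        "event": info.findtext("cap:event", namespaces=CAP_NS),
        "severity": info.findtext("cap:severity", namespaces=CAP_NS),
        "area": area.findtext("cap:areaDesc", namespaces=CAP_NS),
        "polygon": area.findtext("cap:polygon", namespaces=CAP_NS),
    }


print(parse_cap(SAMPLE_ALERT)["severity"])  # → Severe
```

The polygon (lat,lon pairs in WGS 84 per the CAP spec) is what would become the classified hazard geometry in the db.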