Freshwater-Initiative / SkagitLandslideHazards

Seattle City Light is interested in improving understanding of landslide hazard and sediment transport to ensure reliable and cost-effective hydropower generation.

Landlab Landslide Model Testing #31

Open ChristinaB opened 4 years ago

ChristinaB commented 4 years ago
  1. Use the code in the .py script to create the mean and std from a netCDF and save the pickle.

  2. Test the code inside 20191206_netcdf_DataDriven_spatial_Depth_Synthetic_LandlabLandslide.ipynb and save the output.

  3. Upload the pickle into the lognormal spatial synthetic notebook and input the pickle instead of the random input data (a sketch of steps 1 and 3 follows).
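A minimal sketch of steps 1 and 3, assuming xarray is used to read the netCDF; the file name `depth_to_water.nc` and the variable name `wt` are placeholders for the actual DHSVM output:

```python
import pickle
import xarray as xr

# Step 1: compute per-node mean and standard deviation over time.
# File and variable names are assumptions for illustration.
data = xr.open_dataset("depth_to_water.nc")
mean_dtw = data["wt"].mean("time")
std_dtw = data["wt"].std("time")

# Save both arrays in one pickle for the lognormal spatial notebook.
with open("dtw_mean_std.p", "wb") as f:
    pickle.dump({"mean": mean_dtw.values, "std": std_dtw.values}, f)

# Step 3: in the lognormal spatial synthetic notebook, load the pickle
# and use it in place of the random input data.
with open("dtw_mean_std.p", "rb") as f:
    dtw = pickle.load(f)
```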

RondaStrauch commented 4 years ago

What I was thinking for this task is to:

1) Convert and test the mean and stdev from Nicoleta's Step_2 notebook, going from an xarray to a vector or numpy array that can be exported/saved as text, so that we can use it directly in a lognormal notebook similar to 20191207_netcdf_Lognormal_spatial_Depth_Synthetic_LandlabLandslide.ipynb, line 13, but with real data. We just need to make sure the flattened matrix starts with the first value representing grid 0, the bottom-left corner. Thus, we need to test this. The 'chunk' function may do this already, and we may just need to bring that into Nicoleta's Step_2 notebook.

RondaStrauch commented 4 years ago

I think we could do this with an array, flattening from the bottom left up to the top right:

import numpy as np

array = np.array([[6, 7, 8], [3, 4, 5], [0, 1, 2]])
fliparray = np.flip(array, axis=0)  # so only the y axis flips
landlabarray = fliparray.flatten()  # flattens by rows
type(landlabarray)  # should be a numpy array
print(landlabarray)

[0 1 2 3 4 5 6 7 8]
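Applied to the xarray output from Step_2, the same flip-and-flatten is a couple of lines; `mean_dtw` below is a stand-in for the time-mean DataArray (name assumed):

```python
import numpy as np
import xarray as xr

# Stand-in for the 2-D time-mean depth-to-water-table DataArray from
# Step_2 (in practice: mean_dtw = data.wt.mean("time")).
mean_dtw = xr.DataArray(np.array([[6.0, 7, 8], [3, 4, 5], [0, 1, 2]]))

# np.flipud flips along the y axis only, so grid node 0 (bottom-left)
# comes first after the row-wise flatten, matching Landlab's ordering.
landlab_mean = np.flipud(mean_dtw.values).flatten()
print(landlab_mean)  # [0. 1. 2. 3. 4. 5. 6. 7. 8.]
```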

ChristinaB commented 4 years ago

These are the files to continue coding in:
20191206_netcdf_DataDriven_spatial_Depth_SCL_LandlabLandslide or 20191206_netcdf_DataDriven_spatial_Depth_Synthetic_LandlabLandslide.ipynb

ChristinaB commented 4 years ago

March 20, 2020 Update
1) I have two folders of dates in this Slippery Future Data HS resource: this is the link to the Skagit dates.
2) I finished postprocessing the Skagit historic model, but it's a huge model folder getting zipped. Still running... Is there any other processing we need to do? Streamflow? Trying not to get distracted, but I want to check.

3) Landlab input dictionary: Nicoleta's code was:

# calculate mean of all grid cells
mean_dtw_scl = data.wt.mean("time")

From here I sorted out these four outputs, which are arrays the length of the nodes. I'm running it on both her netcdf built from the ascii AND the flipped version of my netcdf, and we can compare. [One line with xarray or 100 lines with me hacking it... let's go with xarray.] The second block is how we will build the dictionary for the lognormal forcing (a sketch follows). Look good? We can query and compare the two to be sure it gets flattened to an array as expected.
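A minimal sketch of that second block, assuming the xarray approach above; the file name, variable name, and dictionary keys are placeholders, not Landlab field names:

```python
import numpy as np
import xarray as xr

# Open the water table depth netcdf (file and variable names assumed).
data = xr.open_dataset("skagit_dtw.nc")

# Per-node mean and standard deviation: flip so node 0 is the
# bottom-left corner, then flatten to arrays the length of the nodes.
mean_dtw = np.flipud(data.wt.mean("time").values).flatten()
std_dtw = np.flipud(data.wt.std("time").values).flatten()

# One possible layout for the lognormal forcing dictionary.
dtw_lognormal = {"mean": mean_dtw, "standard_deviation": std_dtw}
```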

4) Planning for figures and data management

5) Landlab component updates

ChristinaB commented 4 years ago

Notebook 1.0 performs the hydrologic (DHSVM data) processing functions that create Landlab model inputs, with visualizations that illustrate the methods.

Use the next Slippery Future Paper notebook (Notebook 2.0) to process multiple models:

Version 1: multiple hydrologic models with various climate forcings and lognormal spatial landslides for the SCL domain

Landlab Landslide (Notebook 3.0) for running multiple Landlab landslide model instances (uniform, lognormal, lognormal-spatial, data-driven spatial)


Notebook 3.1: Synthetic domain run with historic climate data and four landslide models

Notebook 3.2: SCL domain Lognormal Spatial Climate Forcing Comparison (1-7 model instances). This Jupyter Notebook runs the Landlab LandslideProbability component on a Seattle City Light Landlab grid using four depth-to-water-table options to replace the recharge options described in the paper (see the sketch below).
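For reference, a minimal sketch of one lognormal-spatial model instance; the field and keyword names follow my reading of the published recharge-based component (Strauch et al., 2018) and may differ across Landlab versions, and all values are placeholders rather than SCL data:

```python
import numpy as np
from landlab import RasterModelGrid
from landlab.components import LandslideProbability

grid = RasterModelGrid((5, 4), xy_spacing=30.0)
n = grid.number_of_nodes

# Required input fields (placeholder values, not SCL data).
grid.at_node["topographic__slope"] = np.full(n, 0.5)
grid.at_node["topographic__specific_contributing_area"] = np.full(n, 100.0)
grid.at_node["soil__transmissivity"] = np.full(n, 10.0)
grid.at_node["soil__saturated_hydraulic_conductivity"] = np.full(n, 0.5)
grid.at_node["soil__mode_total_cohesion"] = np.full(n, 5000.0)
grid.at_node["soil__minimum_total_cohesion"] = np.full(n, 4000.0)
grid.at_node["soil__maximum_total_cohesion"] = np.full(n, 6000.0)
grid.at_node["soil__internal_friction_angle"] = np.full(n, 30.0)
grid.at_node["soil__density"] = np.full(n, 2000.0)
grid.at_node["soil__thickness"] = np.full(n, 1.0)

# Per-node lognormal-spatial forcing (here recharge; the depth-to-
# water-table options in this notebook would replace these inputs).
mean_r = np.full(n, 5.0)
std_r = np.full(n, 1.0)

ls_prob = LandslideProbability(
    grid,
    number_of_iterations=250,
    groundwater__recharge_distribution="lognormal_spatial",
    groundwater__recharge_mean=mean_r,
    groundwater__recharge_standard_deviation=std_r,
)
ls_prob.calculate_landslide_probability()
print(grid.at_node["landslide__probability_of_failure"])
```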

Use the next Slippery Future Paper notebook (Notebook 4.0) to visualize hydro + landslides.