GNS-Science / nzshm-opensha

Renamed successor to the old nshm-nz-opensha; all Python history has been moved to the new nzshm-runzi repo.

doc: write up the polygonisation logic/algo #245

Closed — chrisbc closed this issue 2 years ago

chrisbc commented 2 years ago

Whilst we still remember.

voj commented 2 years ago

Polygonisation of spatial PDF gridded data

Inputs:

STEPS: resolution of the up-sampled grid, in steps per degree
EXPONENT: exponent applied to the polygon weight (see step 2)
  1. Up-sample the PDF grid from 10 steps per degree to STEPS steps per degree.
    • Values for the new grid are assigned by nearest neighbour from the original grid. No new gradients are introduced.
    • The grid is then normalised so that its values sum to 1.
    • From now on we operate on the up-sampled grid unless noted otherwise.
  2. For each sub section (see the first sketch after this list):
    • For each grid point inside the sub section polygon:
      • polyWeight = distance_of_point_to_fault_trace / distance_of_trace_to_poly_border
        • This means polyWeight is 1 if the point lies on the polygon border and 0 if it lies on the fault trace.
        • These distances are measured parallel to the fault dip direction.
      • d = polyWeight^EXPONENT
      • new_value_at_grid_point = old_value * d / sum_of_all_d_in_polygon
  3. Where a grid point falls inside more than one polygon, we average over all of its new values.
  4. Grid points not inside any polygon are left unmodified.
  5. Down-sample back to the 10-steps-per-degree grid resolution.
    • No averaging or normalisation is performed.
    • From now on we operate on the normal grid resolution again.
  6. mMin values are calculated per grid point (see the second sketch after this list):
    • Add up the mMin values from all sub sections whose polygon contains the grid point.
      • mMin values are normalised based on how much the grid bin overlaps with the fault section polygon.
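
A minimal sketch of steps 1–5 in Python/NumPy, under stated assumptions: the PDF grid is a 2-D array, STEPS is an integer multiple of 10 (so `factor = STEPS // 10`), and each sub section exposes hypothetical helpers `contains`, `dist_to_trace` and `dist_trace_to_border` supplying the dip-parallel distances. None of these names come from the actual nzshm-opensha code.

```python
import numpy as np


def upsample_nearest(grid, factor):
    """Step 1: nearest-neighbour up-sampling, then normalise so all values sum to 1."""
    up = np.repeat(np.repeat(grid, factor, axis=0), factor, axis=1)
    return up / up.sum()


def redistribute(grid, sub_sections, exponent):
    """Steps 2-4: re-weight grid values inside each sub section polygon.

    Each entry in `sub_sections` is assumed to expose three hypothetical helpers:
      contains(i, j)              -> True if up-sampled cell (i, j) lies inside the polygon
      dist_to_trace(i, j)         -> dip-parallel distance from the cell to the fault trace
      dist_trace_to_border(i, j)  -> dip-parallel distance from the trace to the polygon border
    """
    grid = np.asarray(grid, dtype=float)
    new_sum = np.zeros_like(grid)           # accumulates redistributed values per cell
    hits = np.zeros(grid.shape, dtype=int)  # how many polygons contain each cell

    for sec in sub_sections:
        cells = [(i, j)
                 for i in range(grid.shape[0])
                 for j in range(grid.shape[1])
                 if sec.contains(i, j)]
        # polyWeight is 0 on the fault trace and 1 on the polygon border
        d = {c: (sec.dist_to_trace(*c) / sec.dist_trace_to_border(*c)) ** exponent
             for c in cells}
        d_total = sum(d.values())
        for c in cells:
            new_sum[c] += grid[c] * d[c] / d_total   # new_value = old_value * d / sum(d)
            hits[c] += 1

    out = grid.copy()                             # step 4: cells outside all polygons keep old values
    inside = hits > 0
    out[inside] = new_sum[inside] / hits[inside]  # step 3: average where polygons overlap
    return out


def downsample(grid, factor):
    """Step 5: collapse each factor x factor block back to one coarse cell.

    The write-up only says no averaging or normalisation is performed; summing the
    block is one plausible reading, not a statement of what the real code does.
    """
    h, w = grid.shape
    return grid.reshape(h // factor, factor, w // factor, factor).sum(axis=(1, 3))
```

For example, with the original 10-steps-per-degree grid and STEPS = 100, `factor` would be 10.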
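For step 6, a similarly hedged sketch of the per-grid-point mMin calculation: each coarse grid point collects the mMin of every sub section whose polygon contains it, with each contribution normalised by the fraction of the grid bin that overlaps that polygon. `overlap_fraction` and `min_magnitude` are hypothetical stand-ins for the real geometry and fault-model code.

```python
def grid_point_mmins(grid_points, sub_sections):
    """Step 6 sketch: accumulate overlap-weighted mMin per grid point."""
    mmins = {}
    for point in grid_points:
        total = 0.0
        for sec in sub_sections:
            frac = sec.overlap_fraction(point)  # fraction of the grid bin inside the polygon
            if frac > 0.0:
                total += frac * sec.min_magnitude
        mmins[point] = total
    return mmins
```
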
voj commented 2 years ago

Updated the write-up based on feedback by Matt G and Chris DC.