Need a means of getting from Stantec Picks to Model surfaces.
Stantec provides picks exported from the XS tool. We should also request a polygon of the update area for each submission of picks.
[ ] Potential feature request for XS tool --> draw update area polygon and export geom -- to add to #1
1) Merge latest pick datasets with the previous dataset (picks_recon.py)
[x] Global QA/QC of new pick dataset
[x] Duplicates check
[x] Order check
[x] NaN in Z field check
If the new pick dataset fails any of these checks, review it and send it back to the pick makers for fixing before continuing.
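The three global checks above can be sketched with pandas. The column names (`sys_loc_code`, `Formation`, `MdlSlice`, `Z`) are assumptions about the exported pick schema, and the order check assumes Z should never increase with slice number — adjust to the real export:

```python
import pandas as pd

def global_qaqc(picks: pd.DataFrame) -> dict:
    """Global QA/QC of a new pick dataset. Column names are assumed, not confirmed."""
    issues = {}
    # 1) duplicates check: same location + formation picked more than once
    issues["duplicates"] = picks[
        picks.duplicated(subset=["sys_loc_code", "Formation"], keep=False)
    ]
    # 2) order check: within a location, Z should not increase down the slice order
    def out_of_order(g):
        g = g.sort_values("MdlSlice")
        return bool((g["Z"].diff() > 0).any())
    issues["order"] = picks.groupby("sys_loc_code").filter(out_of_order)
    # 3) NaN in Z field check
    issues["na_z"] = picks[picks["Z"].isna()]
    return issues
```

Each entry of the returned dict holds the offending rows, so an empty dict of empty frames means the dataset passes and the merge can proceed.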
[x] Merge the new QAed picks with the previous dataset (function called pick_merge_check):
[x] round Z to 2 decimals to reduce rounding errors
[x] populate a pick_dict which houses:
[x] unchanged: picks that didn't change (in any field) from previous dataset
[x] base_unique: picks that are unique to the original dataset (e.g. if you're merging a subset into a larger dataset)
[x] new: new picks (picks where the 'PickType' has been flagged as new)
[x] modified: picks where the Z, Comment, Reviewed, Omit, or NewPick field has changed
[x] tricky because there may be reasons to carry older comments forward... manually select whether to keep the old
or new value in the variable explorer. This shouldn't be turned into a function; someone needs to review at this step
[x] group these pick types together to develop a new merged dataset
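A minimal sketch of the pick_merge_check categorisation, assuming picks key on `sys_loc_code` + `Formation`. One simplification: the real flow flags new picks via the PickType field, whereas this sketch treats rows present only in the new dataset as "new":

```python
import pandas as pd

KEYS = ["sys_loc_code", "Formation"]          # assumed join keys
FIELDS = ["Z", "Comment", "Reviewed", "Omit", "NewPick"]

def pick_merge_check(base: pd.DataFrame, new: pd.DataFrame) -> dict:
    """Populate pick_dict with unchanged / base_unique / new / modified picks."""
    # round Z to 2 decimals to reduce rounding errors before comparing
    base = base.assign(Z=base["Z"].round(2))
    new = new.assign(Z=new["Z"].round(2))
    m = base.merge(new, on=KEYS, how="outer", suffixes=("_old", "_new"),
                   indicator=True)
    both = m[m["_merge"] == "both"]
    # a pick is "modified" if any compared field differs between versions
    changed = pd.Series(False, index=both.index)
    for f in FIELDS:
        changed |= both[f + "_old"].fillna("") != both[f + "_new"].fillna("")
    return {
        "unchanged": both[~changed],
        "modified": both[changed],                 # review these by hand
        "base_unique": m[m["_merge"] == "left_only"],
        "new": m[m["_merge"] == "right_only"],
    }
```

Concatenating the four frames (after the manual old-vs-new comment review on "modified") yields the new merged dataset.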
2) Local QA Process (@gmerritt123 )
Need to locally QA picks a number of ways:
[x] ID locations that don't have control point pinchouts and review
[x] ID locations that have grades >20% between their nearest neighbours
[x] get_profile_data from existing model at these locations
[x] Consolidate with model nodes:

sys_loc_code  Reviewed  PickZ  MdlZ  Comment        Formation  MdlSlice  XY     WellName
920005A       1         352    355   I Picked This  ATB2       5         54,32  WT-OW1
920005A       1         NaN    352                  AFB2       6         54,32  WT-OW1
2695421       0         356    356   NodeValue      GS         1         60,35
[x] Routine to calculate thicknesses, with "intentional" NaN returns where no explicit thickness is defined across a unit
[x] Format Reviewed field according to code
[x] Missing picks check (check that every layer is picked at locations where picks have been made)
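Two of the local QA items above as rough sketches — the 20% grade threshold comes from the checklist, but the column names and the nearest-neighbour formulation are assumptions, not the production routines:

```python
import numpy as np
import pandas as pd
from scipy.spatial import cKDTree

def flag_steep_grades(xy: np.ndarray, z: np.ndarray,
                      max_grade: float = 0.20) -> np.ndarray:
    """Flag picks whose grade to their nearest neighbour exceeds max_grade (20%).
    xy: (n, 2) coordinates, z: (n,) pick elevations."""
    dist, idx = cKDTree(xy).query(xy, k=2)   # k=2: first hit is the point itself
    grade = np.abs(z - z[idx[:, 1]]) / dist[:, 1]
    return grade > max_grade

def layer_thicknesses(picks: pd.DataFrame) -> pd.DataFrame:
    """Pivot picks to location x slice, then difference down the slice order.
    A missing pick stays NaN, so thicknesses touching it are 'intentional'
    NaNs rather than silently bridged values."""
    wide = picks.pivot(index="sys_loc_code", columns="MdlSlice",
                       values="Z").sort_index(axis=1)
    # wide.isna() doubles as the missing-picks check per location/layer
    return -wide.diff(axis=1)   # positive thickness for decreasing Z downward
```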
3) Kriging dataset prep (PickQA.py and QGIS)
PickQA.py:
[x] Reads in csv of picks and concave hull around new/modified picks
[x] Reads in model, pulls existing nodes within update area (shapefile made in QGIS)
[x] Translate CSV to shapefile or GeoDataFrame, reproject, etc.
[x] Filter picks based on being within update polygon
[x] Obtain Control Points from existing model using concave hull+buffer
[x] Export of update hull
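The filtering and control-point steps might look like the following with shapely. PickQA.py presumably works on GeoDataFrames rather than plain dicts, and the 500 m buffer distance is an illustrative assumption:

```python
from shapely.geometry import Point, Polygon

def filter_picks_in_update_area(picks: list, update_poly: Polygon) -> list:
    """Keep only picks whose XY falls inside (or on) the update polygon.
    `picks` is a list of dicts with 'x'/'y' keys -- a stand-in for a GeoDataFrame."""
    return [p for p in picks if update_poly.covers(Point(p["x"], p["y"]))]

def control_point_zone(hull: Polygon, buffer_dist: float = 500.0) -> Polygon:
    """Ring between the concave hull and its buffer; existing model nodes
    falling in this ring become the border control points."""
    return hull.buffer(buffer_dist).difference(hull)
```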
QGIS:
[x] read in the merged pick dataset filtered down to the update area in PickQA.py and split into model layers (export result into shapefiles)
For reviewing all our flags and changes to model we need:
[x] Import of model nodal elevations in the update area and make tins of the elevations and thicknesses for each fm
[x] "Elevation" Map
[x] Basic Basemaps
[x] Model Node "Border" control points
[x] Pick Data with themes for:
[x] Locations missing a pick for the currently viewed layer (make the symbol really big)
[x] Whether Location is reviewed/not reviewed (Shape)
[x] Label for Pick and Existing Model Elev
[x] Color --> based on difference between model existing elev and picked elev
[x] something to show who/when picked
[x] "Thickness" Map
[x] Same theming/Basemaps as above
[x] Except color, which gets a bit more complicated:
[ ] Color based on pick thickness vs existing model thickness
[x] Ability to make edits on pick data - we can edit our picks in QGIS (and flag that we edited them) and read these shapefiles into the kriging script
[x] Ability to add new records in QGIS
4) Kriging/Interpolation Routine @jlangford92
Note: punctual kriging > universal kriging or nearest neighbour
Kriging Routine
[ ] Extra piece --> use DataExtraction fns to pull type 1 BCs and flag nodes in upd area that are Type 1 BCs
[x] Dictionary to map kriging params to each slice (sill, range, account for drift etc)
[x] Perform kriging and map to nodes
[x] Variogram plots
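For illustration, punctual ordinary kriging of one node from a slice's picks can be written out directly in numpy. The spherical model and the sill/range values below are placeholders for the per-slice parameter dictionary; this is a sketch, not the actual routine:

```python
import numpy as np

def spherical(h, sill, rng, nugget=0.0):
    """Spherical semivariogram (this is also what the variogram plots show)."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h >= rng, sill, np.where(h == 0.0, 0.0, g))

def krige_point(xy, z, x0, sill, rng, nugget=0.0):
    """Punctual ordinary kriging of one target point x0 from picks (xy, z)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical(d, sill, rng, nugget)
    A[n, n] = 0.0                 # Lagrange row/column for the unbiasedness constraint
    b = np.ones(n + 1)
    b[:n] = spherical(np.linalg.norm(xy - x0, axis=1), sill, rng, nugget)
    w = np.linalg.solve(A, b)     # last entry is the Lagrange multiplier
    return float(w[:n] @ z)
```

With a zero nugget the estimator honours the picks exactly, which is why punctual kriging suits mapping picked elevations onto nearby nodes.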
Constraint Logic
[x] Function to generate "weighting raster" for "blending in" the DEM (for ground surface and for HGUs; completed in PickQA.py and a weight is mapped to the nodes)
Negative buffer the hull back to the "true" concave hull --> obtain line geom of that, and segmentize to high res, assign "weight" of 1 (meaning this area will get new DEM fully weighted)
Segmentize the original hull, assign "weight" of 0 (meaning this area will be fully old DEM weighted) (look at TargetLineSegmentize in https://github.com/jlangford92/GeoSpatialTools/blob/GM_Dump/GeospatialTools.py)
Interpolate (method TBD) a weight value for the model nodes in between these two
Goal is to assign a "weight" to every XY nodal location within the buffered update area hull (Inside inner buffer always = 1)
[ ] Transfer height of new DEM to all nodes in the full update area (JL needs big DEM, GM to provide height transfer script)
[ ] Apply weighted average to Z
[ ] Constrain model layers to this "new" DEM
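The weighting and blending steps above amount to a per-node weighted average; `constrain_layers` is a hypothetical helper showing one way to keep layers at or below the blended DEM (top-down clipping), not the agreed method:

```python
import numpy as np

def blend_z(w, z_new_dem, z_old):
    """Weighted average of the new DEM height and the existing model Z per node.
    w = 1 inside the negative-buffered ("true") hull -> fully new DEM;
    w = 0 on the original hull boundary -> fully old surface."""
    w = np.clip(np.asarray(w, dtype=float), 0.0, 1.0)
    return w * np.asarray(z_new_dem, dtype=float) \
        + (1.0 - w) * np.asarray(z_old, dtype=float)

def constrain_layers(layers, dem, min_gap=0.0):
    """Clip each layer (ordered top -> bottom) so it never rises above the
    surface above it, starting from the blended DEM."""
    out, top = [], dem
    for lz in layers:
        lz = np.minimum(lz, top - min_gap)
        out.append(lz)
        top = lz
    return out
```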