alessandrofelder opened 3 years ago
My best interpretation of page 148 of the GEBCO cookbook makes me think that the source grid data should be combined with the base grid data into one set of x,y,z points and then undergo:

- `blockmedian` (at base grid resolution, I guess)
- `surface`
- `grdfilter` or `grdblend` (cosine tapering)
- `grdsample`

In other words, are we missing steps B and C of that workflow in our implementation?
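For reference, the `blockmedian` step just reduces the combined x,y,z cloud to one representative value per grid cell before `surface` interpolates it. A rough numpy sketch of the idea (not GMT's actual implementation; GMT reports the median x,y position per cell, whereas this simplification returns cell centres):

```python
import numpy as np

def blockmedian(x, y, z, spacing, x0=0.0, y0=0.0):
    """Toy block-median: one median z per grid cell of size `spacing`.

    `spacing` would be the base-grid resolution; x0/y0 anchor the cell
    edges. Illustration only -- GMT's blockmedian also outputs the
    median x,y location, not the cell centre as done here.
    """
    ix = np.floor((np.asarray(x) - x0) / spacing).astype(int)
    iy = np.floor((np.asarray(y) - y0) / spacing).astype(int)
    cells = {}
    for i, j, zi in zip(ix, iy, np.asarray(z, dtype=float)):
        cells.setdefault((i, j), []).append(zi)
    # one (x, y, z) point per occupied cell, at the cell centre
    return [
        (x0 + (i + 0.5) * spacing, y0 + (j + 0.5) * spacing, float(np.median(v)))
        for (i, j), v in sorted(cells.items())
    ]
```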
The GEBCO data has already been through steps A-C, so the base grid we actually put into our remove-restore algorithm is the output of some previous step C. But we then resample the grid to resolutions even finer than 500m, so I'm intrigued by that optional `grdfilter` step and by why they chose the specific "cosine arch" with parameter 6000m. Keep in mind that they're only using remove-restore to create the GEBCO data from their data sources; I don't think every part of their implementation necessarily applies to Cascadia.
Agreed... unless the idea of the algorithm is that the update grid data must be included in those steps before doing the update. I'm not entirely certain that this is the idea, I admit. Does that make sense?
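For context, the remove-restore core itself is small once the grids are on common nodes; a toy numpy sketch of the idea (co-registered arrays and a made-up `blend_weight` argument for the taper, not our actual implementation):

```python
import numpy as np

def remove_restore(base, source, blend_weight):
    """Toy remove-restore update on co-registered 2-D arrays.

    base, source: grids already sampled on the same nodes (in practice
    the base would be resampled onto the source grid first).
    blend_weight: 1 inside the source region, tapering to 0 at its
    edge (e.g. a cosine taper). All names here are illustrative.
    """
    diff = source - base                 # "remove" the base from the update data
    return base + blend_weight * diff    # "restore": add the tapered difference back
```

With `blend_weight == 1` you recover the source grid exactly, with `blend_weight == 0` the base is untouched, and the taper in between is what avoids a step at the edge of the update region.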
To decide the best way forward, we'll test various approaches, including at least
When implementing workflow steps B+C we should try keeping the ratios the same as in the GEBCO cookbook, i.e. the grid in B should have resolution `4*spacing` and the cosine tapering step width `12*spacing`.
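On the cosine tapering: GMT's `grdfilter -Fc` is documented as a "cosine arch" filter, and I'd guess the weight function is the usual raised-cosine form below (note GMT's parameter is the full filter width, so the radius at which the weight hits zero may differ from `width` by a factor of two; this is a sketch, not GMT's code):

```python
import numpy as np

def cosine_taper(r, width):
    """Cosine-arch weight: 1 at r=0, falling smoothly to 0 at r=width.

    `width` here is taken as the radius where the weight reaches zero
    (an assumption about how the cookbook's 6000m / 12*spacing value
    maps onto the formula).
    """
    r = np.asarray(r, dtype=float)
    return 0.5 * (1.0 + np.cos(np.pi * np.clip(r / width, 0.0, 1.0)))
```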
I am testing this with

```
remove-restore --base gebco_2020_n54.0_s43.0_w-130.0_e-117.0.nc --output=vancouver-island-no-smoothing.nc --plot --spacing 0.00028 NOAA_South_VancouverIsland_30m.tif --region_of_interest -123.4 -123 47.8 48.0
```
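As a back-of-envelope sanity check on that `--spacing` value (spherical-Earth approximation, so only roughly right):

```python
import math

# --spacing 0.00028 is in degrees; roughly how many metres is that
# at the test region's latitude (~48 N)?
DEG_TO_M = 111_320.0  # approximate metres per degree of latitude

ns = 0.00028 * DEG_TO_M                                  # north-south spacing, ~31 m
ew = 0.00028 * DEG_TO_M * math.cos(math.radians(48.0))   # east-west spacing, ~21 m
```

So 0.00028 degrees is about 31 m north-south, which lines up with the 30 m nominal resolution of `NOAA_South_VancouverIsland_30m.tif`.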
This PPT by Pauline Weatherall suggests so, but I don't understand how that would work in detail.
@Devaraj-G @JamieJQuinn what do you think? I am tempted to email Pauline and ask for help at this point.