Our shared server is now used for provincial fish passage modelling, and report builds are already slow (over 1 minute for the intro and background sections alone, and these reports are typically built hundreds of times before they are out of draft). The best use of both the server and our time is to run long-running queries, like generating upstream watershed boundaries (especially huge ones), only when necessary.
See https://github.com/NewGraphEnvironment/restoration_wedzin_kwa_2024/blob/b6b968fb39c9cdf81b914f74298d5144faf065f6/index.Rmd#L62
Let's move calls like

`fwapgr::fwa_watershed_at_measure(356362759)`

(a watershed area of 47,268.6 km2!!) into chunks that are re-run only when an update is needed, and store the result in `fishpass_mapping.gpkg` or `bcfishpass.sqlite`.
(https://github.com/NewGraphEnvironment/fish_passage_peace_2022_reporting/tree/main/data/fishpass_mapping)
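The cache-then-read pattern could look something like the sketch below. This is an assumption about how we'd wire it up, not the current setup: the layer name `wshd_study` and the gpkg path are hypothetical, and `sf` is assumed for the read/write.

```r
# Hypothetical cached chunk: only hit the fwapgr API when the layer
# is not already stored in the geopackage.
path <- "data/fishpass_mapping/fishpass_mapping.gpkg"

if (!file.exists(path) ||
    !"wshd_study" %in% sf::st_layers(path)$name) {
  # long-running call: upstream watershed for blue line key 356362759
  wshd <- fwapgr::fwa_watershed_at_measure(blue_line_key = 356362759)
  sf::st_write(wshd, path, layer = "wshd_study")
}

# fast path on every report build: read the cached layer
wshd <- sf::st_read(path, layer = "wshd_study", quiet = TRUE)
```

In an Rmd, the query chunk could alternatively be gated with `eval = FALSE` (or a params flag) and flipped on manually when the boundary actually needs regenerating.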