Closed: sigmafelix closed this issue 9 months ago
My original thoughts were the second option, where the workflow for all data sources is `download_data()` ... `process_data()` ... `calculate_covariates()`. Although the intermediate processing step may simply be reading in the data with `terra::rast()`, I think the consistency is helpful (at least for me).
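For concreteness, here is a minimal sketch of that consistent three-step workflow for a single source. The dataset name "narr", the argument names, and `site_locations` are placeholders for illustration, not the package's actual signatures:

```r
# Hypothetical end-to-end flow for one source; "narr" and all argument
# names below are placeholders, not the package's actual signatures.
download_data(dataset_name = "narr", directory_to_save = "./raw")

# The processing step may be as thin as reading the files with terra::rast()
narr_rast <- process_data(dataset_name = "narr", path = "./raw")

# Covariates are then calculated at the monitoring site locations
narr_covars <- calculate_covariates(
  covariate = "narr",
  from = narr_rast,
  sites = site_locations, # assumed: point locations of monitoring sites
  id_col = "site_id"
)
```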
Why would `process_data()` and `calculate_covariates()` wrapper functions not work the same way as `data_download()`? The data download wrapper function is able to accept a range of arguments based on the underlying source-specific function, so how would the others be different?
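As a rough illustration of the pattern being referred to (not the package's actual code), such a wrapper dispatches on the dataset name and forwards the remaining source-specific arguments through `...`; `process_narr` is a hypothetical source-specific function:

```r
# Sketch of the dispatching pattern: the dataset name selects the
# source-specific function and `...` forwards its arguments unchanged.
process_data <- function(dataset_name, ...) {
  fun <- switch(
    dataset_name,
    narr = process_narr,             # hypothetical source-specific function
    groads = process_sedac_groads,
    stop("Unknown dataset_name: ", dataset_name)
  )
  fun(...)
}
```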
@mitchellmanware Wrapper functions for `process_*` make sense. Once the ongoing PR is finalized, I will work on adding a wrapper.
`process_covariates` is in progress along with adding `process_sedac_groads`. Will make a PR as soon as I finish adding these.
Resolved by #26.
Old `calculate_covariates` used `path`, `sites`, and `id_col` as common arguments. This approach made sense as the previous `calc_*` functions were a combination of processing (or importing) and calculating functions. However, the wrapper will not work now that we split these into two parts, which makes me think about refactoring ideas for the `calc(ulate)_covariates` wrapper function. Options for `calc_covariates`:

- Keep the `process_` parts inside the current `calculate_covariates` for convenience
- Add a `process_covariates` wrapper besides `calculate_covariates` for consistency (sketched below)
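A rough sketch of what a split source-specific pair could look like under the second option; the names (`process_narr`, `calc_narr`) and signatures are illustrative only, not the package's:

```r
library(terra)

# process_*: import/clean the downloaded files into an analysis-ready object
process_narr <- function(path, ...) {
  terra::rast(list.files(path, full.names = TRUE))
}

# calc_*: extract values at site locations and attach the site identifier
calc_narr <- function(from, sites, id_col = "site_id", ...) {
  # `sites` is assumed to be a terra SpatVector of point locations
  vals <- terra::extract(from, sites, ID = FALSE)
  cbind(as.data.frame(sites)[, id_col, drop = FALSE], vals)
}
```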
To deal with data-specific arguments in a wrapper function, I found that the combined use of the ellipsis argument (`...`) and `rlang::inject(foo(!!!args))` is helpful for development, which is reflected in my 0.1.0 PR #13.
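A small sketch of that pattern, assuming a wrapper that collects data-specific arguments through `...` and dispatches on the covariate name; `calc_narr` and `calc_sedac_groads` are hypothetical source-specific functions standing in for `foo()`:

```r
library(rlang)

# Sketch of the `...` + rlang::inject() pattern: data-specific arguments are
# collected into a list and spliced into the selected source-specific call.
calc_covariates <- function(covariate, ...) {
  args <- list(...)
  fun <- switch(
    covariate,
    narr = calc_narr,                # hypothetical source-specific functions
    groads = calc_sedac_groads,
    stop("Unknown covariate: ", covariate)
  )
  rlang::inject(fun(!!!args))        # in spirit, do.call(fun, args)
}
```

Compared to forwarding `...` directly, capturing the arguments in a list first makes it easier to inspect, validate, or amend them before the underlying call is made.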