Open heyairf opened 1 month ago
Thanks for submitting to rOpenSci, our editors and @ropensci-review-bot will reply soon. Type `@ropensci-review-bot help` for help.
:rocket:
Editor check started
:wave:
git hash: aacfb397
(Checks marked with :eyes: may be optionally addressed.)
Package License: GPL (>= 3)
The table below tallies all function calls to all packages ('ncalls'), both internal (r-base + recommended, along with the package itself), and external (imported and suggested packages). 'NA' values indicate packages to which no identified calls to R functions could be found. Note that these results are generated by an automated code-tagging system which may not be entirely accurate.
|type |package | ncalls|
|:----------|:-------------|------:|
|internal |base | 147|
|internal |fireexposuR | 14|
|internal |grid | 12|
|internal |stats | 10|
|internal |utils | 6|
|internal |graphics | 4|
|internal |grDevices | 2|
|imports |magrittr | 95|
|imports |dplyr | 49|
|imports |ggplot2 | 49|
|imports |terra | 45|
|imports |tidyterra | 15|
|imports |geosphere | 8|
|imports |maptiles | 6|
|imports |ggspatial | 4|
|imports |tidyr | 3|
|imports |MultiscaleDTM | 2|
|imports |rlang | NA|
|imports |tidyselect | NA|
|suggests |knitr | NA|
|suggests |rmarkdown | NA|
|suggests |testthat | NA|
|linking_to |NA | NA|
Click below for tallies of functions used in each package. Locations of each call within this package may be generated locally by running `s <- pkgstats::pkgstats(<path/to/repo>)`, and examining the 'external_calls' table.
base: c (27), exp (19), sum (11), cbind (9), data.frame (9), as.data.frame (8), paste (6), ifelse (5), by (4), levels (4), matrix (4), ncol (4), all (3), factor (3), labels (3), rep (3), seq (3), class (2), match (2), names (2), round (2), unique (2), as.factor (1), for (1), if (1), length (1), max (1), mean (1), missing (1), outer (1), plot (1), rbind (1), scale (1), table (1)

magrittr: %>% (95)

dplyr: mutate (43), count (4), case_when (1), summarise (1)

ggplot2: element_blank (9), ggplot (7), aes (6), element_text (5), coord_sf (4), geom_col (3), labs (3), element_rect (2), expansion (2), scale_fill_manual (2), geom_bar (1), geom_hline (1), geom_vline (1), scale_x_discrete (1), scale_y_continuous (1), theme (1)

terra: classify (7), crop (6), focal (4), mask (4), project (4), vect (4), extract (3), res (3), rescale (3), spatSample (2), as.polygons (1), buffer (1), crs (1), merge (1), perim (1)

tidyterra: geom_spatraster_rgb (6), geom_spatvector (3), rename (2), filter (1), geom_spatraster (1), scale_fill_whitebox_c (1), whitebox.colors (1)

fireexposuR: exposure (7), direxp (2), adjustexp (1), extractexp (1), mapexpclass (1), mapexpcont (1), multidirexp (1)

grid: unit (12)

stats: window (6), df (4)

geosphere: destPoint (8)

maptiles: get_credit (3), get_tiles (3)

utils: data (6)

ggspatial: annotation_scale (4)

graphics: title (4)

tidyr: pivot_wider (2), pivot_longer (1)

grDevices: palette (2)

MultiscaleDTM: annulus_window (2)
This package features some noteworthy statistical properties which may need to be clarified by a handling editor prior to progressing.
The package has:

- code in R (100% in 11 files)
- 2 authors
- 1 vignette
- no internal data file
- 12 imported packages
- 9 exported functions (median 52 lines of code)
- 9 non-exported functions in R (median 76 lines of code)

---

Statistical properties of package structure as distributional percentiles in relation to all current CRAN packages

The following terminology is used:

- `loc` = "Lines of Code"
- `fn` = "function"
- `exp`/`not_exp` = exported / not exported

All parameters are explained as tooltips in the locally-rendered HTML version of this report generated by [the `checks_to_markdown()` function](https://docs.ropensci.org/pkgcheck/reference/checks_to_markdown.html). The final measure (`fn_call_network_size`) is the total number of calls between functions (in R), or more abstract relationships between code objects in other languages. Values are flagged as "noteworthy" when they lie in the upper or lower 5th percentile.

|measure | value| percentile|noteworthy |
|:------------------------|-----:|----------:|:----------|
|files_R | 11| 60.1| |
|files_vignettes | 1| 62.0| |
|files_tests | 10| 87.6| |
|loc_R | 808| 59.6| |
|loc_vignettes | 114| 28.0| |
|loc_tests | 289| 60.1| |
|num_vignettes | 1| 59.0| |
|n_fns_r | 18| 25.4| |
|n_fns_r_exported | 9| 42.3| |
|n_fns_r_not_exported | 9| 20.6| |
|n_fns_per_file_r | 1| 1.9|TRUE |
|num_params_per_fn | 4| 51.1| |
|loc_per_fn_r | 60| 92.4| |
|loc_per_fn_r_exp | 52| 80.4| |
|loc_per_fn_r_not_exp | 76| 95.3|TRUE |
|rel_whitespace_R | 11| 47.2| |
|rel_whitespace_vignettes | 32| 25.7| |
|rel_whitespace_tests | 20| 58.5| |
|doclines_per_fn_exp | 50| 62.7| |
|doclines_per_fn_not_exp | 0| 0.0|TRUE |
|fn_call_network_size | 1| 11.5| |

---
Click to see the interactive network visualisation of calls between objects in package
`goodpractice` and other checks

#### 3a. Continuous Integration Badges

[![R-CMD-check.yaml](https://github.com/heyairf/fireexposuR/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/heyairf/fireexposuR/actions)

**GitHub Workflow Results**

| id|name |conclusion |sha | run_number|date |
|-----------:|:----------------|:----------|:------|----------:|:----------|
| 11037129203|pkgcheck |success |aacfb3 | 7|2024-09-25 |
| 11037129172|R-CMD-check.yaml |success |aacfb3 | 7|2024-09-25 |

---

#### 3b. `goodpractice` results

#### `R CMD check` with [rcmdcheck](https://r-lib.github.io/rcmdcheck/)

rcmdcheck found no errors, warnings, or notes

#### Test coverage with [covr](https://covr.r-lib.org/)

Package coverage: 93.87

#### Cyclocomplexity with [cyclocomp](https://github.com/MangoTheCat/cyclocomp)

The following function has cyclocomplexity >= 15:

function | cyclocomplexity
--- | ---
extractexp | 17

#### Static code analyses with [lintr](https://github.com/jimhester/lintr)

[lintr](https://github.com/jimhester/lintr) found the following 10 potential issues:

message | number of times
--- | ---
Avoid 1:length(...) expressions, use seq_len. | 1
Avoid library() and require() calls in packages | 3
Lines should not be more than 80 characters. This line is 83 characters. | 4
Lines should not be more than 80 characters. This line is 84 characters. | 1
Lines should not be more than 80 characters. This line is 85 characters. | 1
:heavy_multiplication_x: The following function name is duplicated in other packages:

- `exposure` from mds, mlxR, netdiffuseR, RsSimulx
|package |version |
|:--------|:--------|
|pkgstats |0.1.6.17 |
|pkgcheck |0.1.2.58 |
This package is in top shape and may be passed on to a handling editor
@ropensci-review-bot check package
Thanks, about to send the query.
Hi @heyairf, local checks indicate that you may not be properly ignoring the codemeta.json file in .Rbuildignore. Can you check on that for me, please?
Hi @adamhsparks, I think that is now fixed?
Edit: Nevermind. Standby.
Thank you! I am still learning a lot about R and github.
@ropensci-review-bot assign @maurolepore as reviewer
Can't assign reviewer because there is no editor assigned for this submission yet
@ropensci-review-bot assign @maurolepore as editor
Assigned! @maurolepore is now the editor
Hi @heyairf, thanks a lot for your submission. It's my pleasure to be the handling editor.
To help you track my comments I'll tag them with "ml" and a numbered sequence, e.g. ml01, ml02, and so on. Comments following bullets are for you to consider -- you may or may not respond to them. Comments following check-boxes are requests for some action -- please respond.
I'll post my checks in a few days. I already explored the package and it looks to be in good shape. I appreciate your kindness in sharing your work with the rOpenSci community, and your willingness to learn a bit more about R package development. In that spirit I'll share a number of suggestions to make the reviews as smooth as possible.
Hi @maurolepore, I am very much looking forward to the review process and the opportunity to learn. Please note that the world of R package development is very new to me, so my reviewer suggestions will reflect that. The people suggested below participate in wildfire research projects, are proficient in R, and are, to varying degrees, familiar with the methods in this package.
Hi @heyairf sorry for the delay. I'm close to finishing polishing the notes from my checks. I expect to finish it by Wed-Thu at the latest.
Dear @heyairf,
Once again, thank you for sharing your great work with the rOpenSci community.
Here are my checks and comments (based on the editor's template). Remember the items in bullets are just for your consideration, and the check-boxes require some action (sometimes just an explanation). The more items you can address before review, the better: it will reduce the chance that reviewers pick up on these relatively uninteresting structural issues and will free them to focus on the more interesting aspects of the package.
Please feel free to ask for clarifications or help. If I don't hear from you before, I'll touch base in a couple of weeks.
[ ] Documentation: The package has sufficient documentation available online (README, pkgdown docs) to allow for an assessment of functionality and scope without installing the package. In particular,
[x] Installation instructions: Are installation instructions clear enough for human users?
[ ] Tests: If the package has some interactivity / HTTP / plot production etc. are the tests using state-of-the-art tooling?
[x] Contributing information: Is the documentation for contribution clear enough e.g. tokens for tests, playgrounds?
[x] License: The package has a CRAN or OSI accepted license.
[ ] Project management: Are the issue and PR trackers in a good shape, e.g. are there outstanding bugs, is it clear when feature requests are meant to be tackled?
"an accompanying paper details suggestions for data acquisition and preparation in accordance with various budget limitations and user experience levels."
We recommend creating a documentation website for your package using pkgdown. -- https://devdevguide.netlify.app/pkg_building.html#website
It sounds hard but all you need to do is run `usethis::use_pkgdown_github_pages()` in the console:

> `use_pkgdown_github_pages()`: implements the GitHub setup needed to automatically publish your pkgdown site to GitHub pages. -- https://usethis.r-lib.org/reference/use_pkgdown.html
This allows "for an assessment of functionality and scope without installing the package", which should help the reviewers and users.
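For instance, a minimal sketch of the setup (assuming the usethis and pkgdown packages are installed; the local preview step is my addition, not part of the instructions above):

```r
# One-time setup, run from the package's root directory:
usethis::use_pkgdown_github_pages()

# Optional: preview the site locally before pushing.
pkgdown::build_site()
```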
When your package has many functions, use grouping in the reference, which you can do more or less automatically. https://devdevguide.netlify.app/pkg_building.html#function-grouping
More importantly, it would be nice to use references to "exposure" consistently in the names of the functions. Right now I see three ways to refer to exposure:
Consider using a prefix rather than the alternatives. That way users can type the prefix and see all related functions thanks to auto-complete.
Also, "We strongly recommend `snake_case` over all other styles unless you are porting over a package that is already in wide use". For example, `adjust_exp()` (or, even better, `exp_adjust()`) instead of `adjustexp()` (or `expadjust()`).
See Function and argument naming at https://devdevguide.netlify.app/pkg_building.html#function-and-argument-naming
The tests could improve in a number of ways. I don't have first-hand experience testing spatial data specifically, so I'll try to find a reviewer who does (thanks for being proactive in https://discuss.ropensci.org/t/s4-data-for-testthat-and-examples/4007/5). For now I'll share some feedback that applies to tests in general.
These two test-files are the slowest:
✔ | 20 | direxp [18.0s]
✔ | 7 | multidirexp [87.1s]
If you truly can't make the tests run faster, then you may move them to a dedicated folder, e.g. tests/testthat/slow, and run those tests less frequently than every other test -- with `testthat::test_dir("tests/testthat/slow")` (or `testthat::test_dir(testthat::test_path("slow"))`).
References: https://testthat.r-lib.org/reference/test_dir.html and https://testthat.r-lib.org/reference/test_path.html
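As a complementary option (just a sketch; the environment variable and test name below are hypothetical, not something the package defines), slow tests can also be made opt-in with a skip:

```r
test_that("multidirexp() works on the example landscape", {
  # Opt-in flag so this slow test only runs when explicitly requested.
  testthat::skip_if_not(
    identical(Sys.getenv("RUN_SLOW_TESTS"), "true"),
    message = "Set RUN_SLOW_TESTS=true to run the slow tests."
  )
  # ... slow expectations go here ...
})
```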
Some of this output may be expected (e.g. tested with `expect_warning()`), but in those cases the expectations typically swallow the verbose output so that the test report remains decluttered. Here are the verbose functions and the output I see cluttering the test report:
adjustexp:
Proceed with caution: any adjustments to transmission distances should be further validated with observed fire history
direxp:
<SpatRaster> resampled to 500554 cells.
Value object provided has more than one feature, only the first point or polygon will be used.
mapexpcont:
<SpatRaster> resampled to 501264 cells.
See https://ropensci.org/blog/2024/02/06/verbosity-control-packages/
Specifically, I see this code chunk multiple times:
# test data ====================================================================
set.seed(0)
e <- c(45,55,495,505) * 10000
r <- terra::rast(resolution = 100, extent = terra::ext(e))
terra::values(r) <- sample(c(0,1), terra::ncell(r), replace = TRUE)
r <- terra::sieve(r, threshold = 50, directions = 4)
haz <- terra::sieve(r, threshold = 500, directions = 4)
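One way to reduce that repetition (a sketch only, assuming testthat's convention of sourcing `helper-*.R` files before the tests run; the file and function names are hypothetical) is to wrap the setup in a helper:

```r
# tests/testthat/helper-data.R (hypothetical file name)
# Builds the shared example hazard raster once so every test file can reuse it.
make_test_hazard <- function(seed = 0) {
  set.seed(seed)
  e <- c(45, 55, 495, 505) * 10000
  r <- terra::rast(resolution = 100, extent = terra::ext(e))
  terra::values(r) <- sample(c(0, 1), terra::ncell(r), replace = TRUE)
  r <- terra::sieve(r, threshold = 50, directions = 4)
  terra::sieve(r, threshold = 500, directions = 4)
}
```

Each test file could then call `haz <- make_test_hazard()` instead of repeating the whole chunk.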
`expect_error(f(), "calculation failed")` is better than `expect_error(f())`. Here is one example from the package: `expect_error(direxp(2, pt))`.

> Note that the second argument to `expect_error()` is a regular expression — the goal is to find a short fragment of text that matches the error you expect and is unlikely to match errors that you don’t expect. -- https://mastering-shiny.org/scaling-testing.html?q=expect_error#key-expectations
Prefer the specific `expect_error()`, `expect_message()`, or `expect_warning()` rather than the catch-all `expect_condition()`, and use the `regexp` argument to ensure you get the specific condition you want to test. For example, this use of `expect_condition()` throughout seems like a mistake?

test_that("mapexpclass() input checks and function messages work", {
  expect_condition(mapexplass(2, "loc", aoicrs)) # stopifnot() L67
  expect_condition(mapexpclass(expvals, "loc", aoicrs)) # stopifnot() L69
  expect_condition(mapexpclass(exp, "loc", 2)) # stopifnot() L71
  expect_condition(mapexpclass(exp, "loc", aoibig)) # stopifnot() L73
  expect_condition(mapexpclass(exp, "blah", aoicrs)) # matchargs() L75
})
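For instance, one of those checks could be tightened with a specific expectation and a pattern (the error-message fragment below is hypothetical; use whatever text the function actually throws):

```r
test_that("mapexpclass() rejects a non-raster exposure input", {
  # Fails unless an error matching the pattern is signalled.
  expect_error(mapexpclass(2, "loc", aoicrs), "exp must be a SpatRaster")
})
```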
If a function is type-stable ... you can predict the output type based only on the input types (not their values). -- https://design.tidyverse.org/out-type-stability.html
The code chunk below shows that `multidirexp()` is not type-stable. To predict whether the output is a "data.frame" or a "ggplot" we need to know the value of the argument `plot`. Consider creating separate functions for returning a "data.frame" and a "ggplot". For example:

Option 1: `data |> multidirexp() |> plot_multidirexp()`

Option 2: `data |> multidirexp() |> plot()` or `data |> multidirexp() |> ggplot2::autoplot()`. This requires understanding a little of the S3 object-oriented system (https://adv-r.hadley.nz/s3.html). Here are some examples using autoplot: https://forestgeo.github.io/fgeo.plot/reference/index.html
# https://github.com/heyairf/fireexposuR/blob/main/tests/testthat/test-multidirexp.R
test_that("multidirexp() returns object with correct class", {
expect_s3_class(multidirexp(exp, pts, plot = T), "ggplot")
expect_s3_class(multidirexp(exp, pts), "data.frame")
})
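A rough sketch of Option 2 under the S3 approach (everything below is illustrative — the function name, columns, class, and plot are made up, and this is not the package's actual implementation):

```r
library(ggplot2)

# Hypothetical: the data-frame-returning function tags its output with a class.
# (The arguments are ignored in this sketch.)
multidir_exposure <- function(exp, pts) {
  out <- data.frame(deg = seq(0, 355, by = 5), prop_viable = runif(72))
  class(out) <- c("multidirexp_df", class(out))
  out
}

# autoplot() then turns that object into a ggplot in a separate, type-stable step.
autoplot.multidirexp_df <- function(object, ...) {
  ggplot(object, aes(x = deg, y = prop_viable)) +
    geom_col()
}

# Usage: multidir_exposure(exp, pts) |> autoplot()
```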
test_that("summexp() input checks work", {
expect_condition(summexp(2)) #exp: right class
expect_condition(summexp(exp, 2)) #aoi: right class
expect_condition(summexp(exp, aoi, "blah")) # classify: match arg
})
# The fireexposuR directory is quite large, so cloning is relatively slow
➜ git du -hd 1 fireexposuR
240K fireexposuR/.Rproj.user
80K fireexposuR/R
20K fireexposuR/inst
76M fireexposuR/.git
16K fireexposuR/vignettes
24K fireexposuR/.github
736K fireexposuR/man
48K fireexposuR/tests
77M fireexposuR
# Almost all that "weight" comes from the .git/ directory alone
➜ git du -hd 1 fireexposuR/.git
4.0K fireexposuR/.git/branches
75M fireexposuR/.git/objects
32K fireexposuR/.git/logs
8.0K fireexposuR/.git/info
28K fireexposuR/.git/refs
64K fireexposuR/.git/hooks
76M fireexposuR/.git
The @ropensci-review-bot check flagged a `1:length(...)` that is best replaced with a call to `seq_len()`.

> Next, beware of iterating over 1:length(x), which will fail in unhelpful ways if x has length 0. -- https://adv-r.hadley.nz/control-flow.html?q=1:leng#common-pitfalls
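A minimal illustration of the pitfall and the fix (the vector `x` is hypothetical, not code from the package):

```r
x <- integer(0)

# Pitfall: 1:length(x) is c(1, 0) when x is empty, so the loop body runs twice.
for (i in 1:length(x)) print(i)

# Fix: seq_along(x) (or seq_len(length(x))) is empty, so the loop body never runs.
for (i in seq_along(x)) print(i)
```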
Hi @maurolepore,
Thank you so much for such detailed feedback! I will work on addressing these comments over the next few days and ask for further clarification if I need to. I very much appreciate all the resources you've included.
Hi @maurolepore,
I have attempted to address your feedback within my package! My replies to each of your comments are below. I expect that some may need to be addressed further; please let me know.
ml01
ml02
ml03
ml04/ml08
ml05
ml06
ml07
ml08
ml09/ml10
ml11
`fire_exp_dir()` now returns a spatial feature that can then be used in `fire_exp_dir_plot()`, which has the option to return a plot or a map.

ml12
ml13
ml14
@ropensci-review-bot check package
Thanks, about to send the query.
@heyairf thanks so much for your effort and your quick response. Great work.
I'll start reaching out to potential reviewers.
Also ...
Warning (test-fire_exp_extract_vis.R:43:3): fire_exp_extract_vis() runs when input conditions are met
Use of .data in tidyselect expressions was deprecated in tidyselect 1.2.0.
i Please use `"class"` instead of `.data$class`
The problem is in R/fire_exp_extract_vis.R#L137. You can fix it by using `"class"` instead of `.data$class`: `tidyr::drop_na("class")`.
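For context, a small before/after sketch of that deprecation fix (the data frame here is hypothetical):

```r
library(tidyr)
df <- data.frame(class = c("low", NA, "high"))

# Deprecated since tidyselect 1.2.0: using .data inside a tidyselect expression.
# drop_na(df, .data$class)

# Preferred: refer to the column by name as a string.
drop_na(df, "class")
```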
Sorry @maurolepore and @heyairf , the check package service is currently not working. It'll be back online in about 12 hours, at which time I'll make sure that request gets processed. Thanks
@heyairf
ml13. I already discussed this with the editorial board and we agree that removing the large files would be beneficial, particularly before we engage the reviewers.
Here are some tools you may consider:
Understanding the goal and knowing about a couple of tools should help you find examples and better support. If that's insufficient please let me know and I'm happy to help.
Remember to back up your original repo and play with copies of it until you're comfortable with the result.
BTW
@mpadge kindly shared a script to detect the largest files in a repository
On fireexposuR I get this output:
$ git rev-list main | while read rev; do git ls-tree -lr $rev | cut -c54- | sed -r 's/^ +//g;'; done | sort -u | perl -e 'while (<>) { chomp; @stuff=split("\t");$sums{$stuff[1]} += $stuff[0];} print "$sums{$_} $_\n" for (keys %sums);' | sort -rn | head -n 20
71482479 inst/extdata/LExpAB2020.tif
8969108 inst/extdata/LHazAB2020.tif
4976880 inst/extdata/nonburnableAB2020.tif
861294 inst/extdata/forR.gdb/a0000000c.gdbtable
703342 man/figures/README-maplandscape-1.png
680608 inst/extdata/fpa.shp
658836 inst/extdata/WHITstruc.shp
645400 inst/extdata/WHITstruc.dbf
555028 inst/extdata/forR.gdb/a00000010.gdbtable
474724 inst/extdata/forR.gdb/a00000014.gdbtable
360794 inst/extdata/forR.gdb/a00000004.gdbtable
213432 inst/extdata/hazard.tif
204820 inst/extdata/WHITbuilt.shp
163292 inst/extdata/forestarea.shp
162201 R/direxp.R
159766 inst/extdata/forR.gdb/a00000010.spx
155670 inst/extdata/forR.gdb/a00000004.spx
142837 man/figures/README-aoivis-1.png
135190 inst/extdata/forR.gdb/a00000014.spx
109006 man/figures/README-mapdir-1.png
@ropensci-review-bot check package
Thanks, about to send the query.
git hash: 01b6686c
Package License: GPL (>= 3)
The table below tallies all function calls to all packages ('ncalls'), both internal (r-base + recommended, along with the package itself), and external (imported and suggested packages). 'NA' values indicate packages to which no identified calls to R functions could be found. Note that these results are generated by an automated code-tagging system which may not be entirely accurate.
|type |package | ncalls|
|:----------|:-------------|------:|
|internal |base | 150|
|internal |grid | 12|
|internal |stats | 12|
|internal |fireexposuR | 8|
|internal |utils | 8|
|internal |graphics | 5|
|internal |grDevices | 4|
|imports |magrittr | 100|
|imports |ggplot2 | 53|
|imports |dplyr | 51|
|imports |terra | 47|
|imports |tidyterra | 17|
|imports |geosphere | 8|
|imports |maptiles | 8|
|imports |tidyr | 5|
|imports |ggspatial | 4|
|imports |MultiscaleDTM | 2|
|imports |rlang | NA|
|imports |tidyselect | NA|
|suggests |knitr | NA|
|suggests |rmarkdown | NA|
|suggests |testthat | NA|
|linking_to |NA | NA|
Click below for tallies of functions used in each package. Locations of each call within this package may be generated locally by running `s <- pkgstats::pkgstats(<path/to/repo>)`, and examining the 'external_calls' table.
base: c (25), exp (20), sum (11), cbind (9), data.frame (9), as.data.frame (8), paste (7), ifelse (5), by (4), levels (4), matrix (4), ncol (4), factor (3), labels (3), rep (3), seq (3), all (2), class (2), match (2), max (2), mean (2), names (2), round (2), unique (2), as.factor (1), drop (1), for (1), if (1), missing (1), nrow (1), outer (1), rbind (1), scale (1), seq_len (1), t (1), table (1)

magrittr: %>% (100)

ggplot2: element_blank (9), aes (8), ggplot (8), element_text (5), coord_sf (4), geom_col (3), labs (3), element_rect (2), expansion (2), scale_fill_manual (2), geom_bar (1), geom_hline (1), geom_vline (1), scale_color_manual (1), scale_x_discrete (1), scale_y_continuous (1), theme (1)

dplyr: mutate (42), count (4), rename (3), case_when (1), summarise (1)

terra: classify (7), crop (6), extract (5), focal (4), mask (4), project (4), vect (4), res (3), rescale (3), spatSample (2), as.polygons (1), buffer (1), crs (1), merge (1), perim (1)

tidyterra: geom_spatraster_rgb (6), geom_spatvector (3), whitebox.colors (3), rename (2), filter (1), geom_spatraster (1), scale_fill_whitebox_c (1)

grid: unit (12)

stats: df (6), window (6)

fireexposuR: fire_exp_dir (2), fire_exp (1), fire_exp_adjust (1), fire_exp_dir_multi (1), fire_exp_dir_plot (1), fire_exp_extract (1), fire_exp_extract_vis (1)

geosphere: destPoint (8)

maptiles: get_tiles (5), get_credit (3)

utils: data (8)

graphics: title (5)

tidyr: pivot_wider (4), pivot_longer (1)

ggspatial: annotation_scale (4)

grDevices: palette (4)

MultiscaleDTM: annulus_window (2)
This package features some noteworthy statistical properties which may need to be clarified by a handling editor prior to progressing.
The package has:

- code in R (100% in 13 files)
- 2 authors
- 1 vignette
- no internal data file
- 12 imported packages
- 11 exported functions (median 52 lines of code)
- 11 non-exported functions in R (median 76 lines of code)

---

Statistical properties of package structure as distributional percentiles in relation to all current CRAN packages

The following terminology is used:

- `loc` = "Lines of Code"
- `fn` = "function"
- `exp`/`not_exp` = exported / not exported

All parameters are explained as tooltips in the locally-rendered HTML version of this report generated by [the `checks_to_markdown()` function](https://docs.ropensci.org/pkgcheck/reference/checks_to_markdown.html). The final measure (`fn_call_network_size`) is the total number of calls between functions (in R), or more abstract relationships between code objects in other languages. Values are flagged as "noteworthy" when they lie in the upper or lower 5th percentile.

|measure | value| percentile|noteworthy |
|:------------------------|-----:|----------:|:----------|
|files_R | 13| 66.0| |
|files_vignettes | 1| 61.9| |
|files_tests | 11| 88.9| |
|loc_R | 823| 60.0| |
|loc_vignettes | 110| 26.9| |
|loc_tests | 306| 61.1| |
|num_vignettes | 1| 58.8| |
|n_fns_r | 22| 30.4| |
|n_fns_r_exported | 11| 48.6| |
|n_fns_r_not_exported | 11| 24.7| |
|n_fns_per_file_r | 1| 1.9|TRUE |
|num_params_per_fn | 3| 29.3| |
|loc_per_fn_r | 60| 92.6| |
|loc_per_fn_r_exp | 52| 80.4| |
|loc_per_fn_r_not_exp | 76| 95.3|TRUE |
|rel_whitespace_R | 12| 50.3| |
|rel_whitespace_vignettes | 34| 26.2| |
|rel_whitespace_tests | 22| 60.9| |
|doclines_per_fn_exp | 39| 48.4| |
|doclines_per_fn_not_exp | 0| 0.0|TRUE |
|fn_call_network_size | 1| 11.5| |

---
Click to see the interactive network visualisation of calls between objects in package
`goodpractice` and other checks

#### 3a. Continuous Integration Badges

[![R-CMD-check.yaml](https://github.com/heyairf/fireexposuR/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/heyairf/fireexposuR/actions)

**GitHub Workflow Results**

| id|name |conclusion |sha | run_number|date |
|-----------:|:--------------------------|:----------|:------|----------:|:----------|
| 11492044488|pages build and deployment |success |d24548 | 10|2024-10-24 |
| 11492024343|pkgcheck |success |01b668 | 20|2024-10-24 |
| 11492024335|pkgdown.yaml |success |01b668 | 9|2024-10-24 |
| 11492024339|R-CMD-check.yaml |success |01b668 | 21|2024-10-24 |

---

#### 3b. `goodpractice` results

#### `R CMD check` with [rcmdcheck](https://r-lib.github.io/rcmdcheck/)

rcmdcheck found no errors, warnings, or notes

#### Test coverage with [covr](https://covr.r-lib.org/)

Package coverage: 76.81

#### Cyclocomplexity with [cyclocomp](https://github.com/MangoTheCat/cyclocomp)

No functions have cyclocomplexity >= 15

#### Static code analyses with [lintr](https://github.com/jimhester/lintr)

[lintr](https://github.com/jimhester/lintr) found no issues with this package!
|package |version |
|:--------|:--------|
|pkgstats |0.2.0 |
|pkgcheck |0.1.2.61 |
This package is in top shape and may be passed on to a handling editor
@ropensci-review-bot seeking reviewers
Please add this badge to the README of your package repository:
[![Status at rOpenSci Software Peer Review](https://badges.ropensci.org/659_status.svg)](https://github.com/ropensci/software-review/issues/659)
Furthermore, if your package does not have a NEWS.md file yet, please create one to capture the changes made during the review process. See https://devguide.ropensci.org/releasing.html#news
ml13. thank you for your suggestions and investigation into where the problem is! I received the same links from the digital library resources at my university. I will try to arrange to meet with someone to go through the process of applying either method to my repo. This is something I'd rather have some in-person (or at least live) assistance with as I'm still learning... I expect I will be able to find someone to help me with on Monday! I will drop a note once the large files have been dealt with.
Regarding any potential conflicts of interest with the recommended reviewers: I have not personally collaborated with any of the suggested reviewers on any previous projects. I will note, however, that Liz Chapman has worked on projects with my academic supervisor in the past. My supervisor is listed as an author on this project because their support and guidance in ensuring the functions properly represent the published methods they automate was invaluable. I have updated the description and citation files to be explicit that I am the main author and contributor to the code and package development after reading this article: https://journal.r-project.org/articles/RJ-2012-009/.
@ropensci-review-bot assign @ronnyhdez as reviewer
@ronnyhdez added to the reviewers list. Review due date is 2024-11-18. Thanks @ronnyhdez for accepting to review! Please refer to our reviewer guide.
rOpenSci’s community is our best asset. We aim for reviews to be open, non-adversarial, and focused on improving software quality. Be respectful and kind! See our reviewers guide and code of conduct for more.
@ronnyhdez: If you haven't done so, please fill this form for us to update our reviewers records.
@ronnyhdez, thanks so much for kindly contributing your time to review this submission.
You're welcome to start your review any time but note that the Git history will be overwritten to remove large files that are no longer needed. You may read the context by finding references to "ml13" in this thread.
Typically this would not be a problem since reviewers rarely push commits. But if you do plan to submit a PR, then keep this in mind and maybe save the change in a patch file and apply it to the repo after it's overwritten. If you end up going this way I'm happy to help.
Hi @maurolepore,
I got some help today using the BFG Repo-Cleaner... I think we cleared the large .tif files from the git history. Would you be able to confirm that for me?
Thanks! Air
@heyairf great work! The repo is now about 10 times smaller: it was reduced from 80 MB to 9 MB.
NOW
➜ git du -hd 1 fireexposuR
84K fireexposuR/R
232K fireexposuR/inst
7,9M fireexposuR/.git
16K fireexposuR/vignettes
28K fireexposuR/.github
608K fireexposuR/man
52K fireexposuR/tests
9,0M fireexposuR
BEFORE
➜ git du -hd 1 fireexposuR-backup
84K fireexposuR-backup/R
232K fireexposuR-backup/inst
79M fireexposuR-backup/.git
16K fireexposuR-backup/vignettes
28K fireexposuR-backup/.github
608K fireexposuR-backup/man
52K fireexposuR-backup/tests
80M fireexposuR-backup
@ropensci-review-bot check package
Thanks, about to send the query.
:rocket:
Editor check started
:wave:
git hash: b689adb4
Package License: GPL (>= 3)
The table below tallies all function calls to all packages ('ncalls'), both internal (r-base + recommended, along with the package itself), and external (imported and suggested packages). 'NA' values indicate packages to which no identified calls to R functions could be found. Note that these results are generated by an automated code-tagging system which may not be entirely accurate.
|type |package | ncalls|
|:----------|:-------------|------:|
|internal |base | 151|
|internal |grid | 12|
|internal |stats | 12|
|internal |fireexposuR | 8|
|internal |utils | 8|
|internal |graphics | 5|
|internal |grDevices | 4|
|imports |magrittr | 102|
|imports |ggplot2 | 53|
|imports |dplyr | 51|
|imports |terra | 49|
|imports |tidyterra | 17|
|imports |geosphere | 8|
|imports |maptiles | 8|
|imports |tidyr | 5|
|imports |ggspatial | 4|
|imports |MultiscaleDTM | 2|
|imports |rlang | NA|
|imports |tidyselect | NA|
|suggests |knitr | NA|
|suggests |rmarkdown | NA|
|suggests |testthat | NA|
|linking_to |NA | NA|
Click below for tallies of functions used in each package. Locations of each call within this package may be generated locally by running `s <- pkgstats::pkgstats(<path/to/repo>)`, and examining the 'external_calls' table.
base: c (26), exp (20), sum (11), cbind (9), data.frame (9), as.data.frame (8), paste (7), ifelse (5), by (4), levels (4), matrix (4), ncol (4), factor (3), labels (3), rep (3), seq (3), all (2), class (2), match (2), max (2), mean (2), names (2), round (2), unique (2), as.factor (1), drop (1), for (1), if (1), missing (1), nrow (1), outer (1), rbind (1), scale (1), seq_len (1), t (1), table (1)

magrittr: %>% (102)

ggplot2: element_blank (9), aes (8), ggplot (8), element_text (5), coord_sf (4), geom_col (3), labs (3), element_rect (2), expansion (2), scale_fill_manual (2), geom_bar (1), geom_hline (1), geom_vline (1), scale_color_manual (1), scale_x_discrete (1), scale_y_continuous (1), theme (1)

dplyr: mutate (42), count (4), rename (3), case_when (1), summarise (1)

terra: classify (8), crop (6), extract (5), focal (4), mask (4), project (4), vect (4), res (3), rescale (3), buffer (2), spatSample (2), as.polygons (1), crs (1), merge (1), perim (1)

tidyterra: geom_spatraster_rgb (6), geom_spatvector (3), whitebox.colors (3), rename (2), filter (1), geom_spatraster (1), scale_fill_whitebox_c (1)

grid: unit (12)

stats: df (6), window (6)

fireexposuR: fire_exp_dir (2), fire_exp (1), fire_exp_adjust (1), fire_exp_dir_multi (1), fire_exp_dir_plot (1), fire_exp_extract (1), fire_exp_extract_vis (1)

geosphere: destPoint (8)

maptiles: get_tiles (5), get_credit (3)

utils: data (8)

graphics: title (5)

tidyr: pivot_wider (4), pivot_longer (1)

ggspatial: annotation_scale (4)

grDevices: palette (4)

MultiscaleDTM: annulus_window (2)
This package features some noteworthy statistical properties which may need to be clarified by a handling editor prior to progressing.
The package has:

- code in R (100% in 13 files)
- 1 author
- 1 vignette
- no internal data file
- 12 imported packages
- 11 exported functions (median 52 lines of code)
- 11 non-exported functions in R (median 76 lines of code)

---

Statistical properties of package structure as distributional percentiles in relation to all current CRAN packages

The following terminology is used:

- `loc` = "Lines of Code"
- `fn` = "function"
- `exp`/`not_exp` = exported / not exported

All parameters are explained as tooltips in the locally-rendered HTML version of this report generated by [the `checks_to_markdown()` function](https://docs.ropensci.org/pkgcheck/reference/checks_to_markdown.html). The final measure (`fn_call_network_size`) is the total number of calls between functions (in R), or more abstract relationships between code objects in other languages. Values are flagged as "noteworthy" when they lie in the upper or lower 5th percentile.

|measure | value| percentile|noteworthy |
|:------------------------|-----:|----------:|:----------|
|files_R | 13| 66.0| |
|files_vignettes | 1| 61.9| |
|files_tests | 11| 88.8| |
|loc_R | 829| 60.2| |
|loc_vignettes | 110| 26.9| |
|loc_tests | 306| 61.1| |
|num_vignettes | 1| 58.8| |
|n_fns_r | 22| 30.4| |
|n_fns_r_exported | 11| 48.6| |
|n_fns_r_not_exported | 11| 24.7| |
|n_fns_per_file_r | 1| 1.9|TRUE |
|num_params_per_fn | 3| 29.3| |
|loc_per_fn_r | 60| 92.6| |
|loc_per_fn_r_exp | 52| 80.4| |
|loc_per_fn_r_not_exp | 76| 95.3|TRUE |
|rel_whitespace_R | 13| 51.5| |
|rel_whitespace_vignettes | 34| 26.2| |
|rel_whitespace_tests | 22| 60.9| |
|doclines_per_fn_exp | 39| 48.4| |
|doclines_per_fn_not_exp | 0| 0.0|TRUE |
|fn_call_network_size | 1| 11.5| |

---
Click to see the interactive network visualisation of calls between objects in package
`goodpractice` and other checks

#### 3a. Continuous Integration Badges

[![R-CMD-check.yaml](https://github.com/heyairf/fireexposuR/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/heyairf/fireexposuR/actions)

**GitHub Workflow Results**

| id|name |conclusion |sha | run_number|date |
|-----------:|:--------------------------|:----------|:------|----------:|:----------|
| 11616030520|pages build and deployment |success |771e1a | 14|2024-10-31 |
| 11615992346|pkgcheck |success |b689ad | 24|2024-10-31 |
| 11615992348|pkgdown.yaml |success |b689ad | 13|2024-10-31 |
| 11615992344|R-CMD-check.yaml |failure |a18d26 | 26|2024-10-31 |

---

#### 3b. `goodpractice` results

#### `R CMD check` with [rcmdcheck](https://r-lib.github.io/rcmdcheck/)

rcmdcheck found no errors, warnings, or notes

#### Test coverage with [covr](https://covr.r-lib.org/)

Package coverage: 76.99

#### Cyclocomplexity with [cyclocomp](https://github.com/MangoTheCat/cyclocomp)

No functions have cyclocomplexity >= 15

#### Static code analyses with [lintr](https://github.com/jimhester/lintr)

[lintr](https://github.com/jimhester/lintr) found no issues with this package!
|package |version |
|:--------|:--------|
|pkgstats |0.2.0.46 |
|pkgcheck |0.1.2.63 |
This package is in top shape and may be passed on to a handling editor
@heyairf just to keep you posted. I'm still searching for a second reviewer. It can take some time because we try to avoid reaching out to too many potential reviewers at a time. Once we send a request we wait a few days before we reach out to the next.
Thanks for your patience.
No problem at all, @maurolepore
In the meantime, is it okay if I make some small changes as issues come up? My colleagues have been using the package with their own data and I'd like to make some improvements based on their feedback.
Air
Thanks @heyairf for your understanding.
So great to hear the package is getting useful feedback! In the end that's what we are all here for.
The safest strategy is to add any change to a branch other than `main` -- which I see is the default branch on the GitHub repo. This way we avoid forcing the existing reviewer to work against a moving target. Then you can incorporate everyone's feedback at the end.
Another strategy we might try is to touch base with the existing reviewer. If he hasn't started yet, then you may have a short window of time when you could push changes safely.
I'll ask in the comment below.
Dear @ronnyhdez, thanks again for accepting so quickly to review this submission.
Happily the package already got some useful feedback and we're wondering whether to push it to `main` or a separate branch. What makes the difference is whether or not you already started your review.
Please do not feel pressed in any direction. I'm neither asking you to rush nor to slow down. I'm just checking the status to plan the best merge-strategy.
With that long context in mind the question is this:
Have you already started your review (or are you planning to start it soon -- say before the end of this week)?
Hi @heyairf and @maurolepore!
There is no problem with me if you merge the changes to the main branch.
I do have some notes about my review but mostly on documentation. I will continue this weekend with the review, so I could use the repo with new changes.
Thanks for letting me know!
@ronnyhdez thanks so much for the update and your flexibility.
@heyairf I hope the last few comments help you decide which strategy is best to ensure you can capitalize on the feedback you got while also ensuring Ronny's work remains relevant and smooth.
Let me know if anything is unclear or you need some other help.
:calendar: @ronnyhdez you have 2 days left before the due date for your review (2024-11-18).
Hi @heyairf and @maurolepore! Thanks for the chance to review this package! Here are my suggestions.
@heyairf if there is something that is not clear, please let me know. This is my first review, so I'm also learning about the process :smiley:
Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide
The package includes all the following forms of documentation:

- `URL`, `BugReports` and `Maintainer` (which may be autogenerated via `Authors@R`).

Estimated hours spent reviewing: 8
It would be helpful to clarify how coordinate reference systems are handled throughout the package, particularly whether transformations affect only the visualization outputs or also the underlying analyses. For example, the use of tiles involves the re-projection to EPSG:3857, which is necessary for visualization. Given that the target audience of this package seems to be wildfire scientists, documenting these spatial operations would enhance the functions' usability.
Also, it would be helpful to indicate the geographical scope of the implemented methods. From the references included in the README, it seems that the studies were all conducted in Alberta, Canada. If the methods are specifically calibrated for this region's characteristics, this should be explicitly stated in the package documentation to help users assess the applicability to their study areas.
While references to scientific publications provide valuable methodological context, the function documentation would be more effective if it primarily focused on usage and implementation details, with citations serving as supplementary information. This would enhance the user experience by providing immediate access to practical implementation guidance.
I would suggest adding a CITATION file (see rOpenSci guidelines here) to help users properly acknowledge your work when using the package in their research.
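If it helps, a rough sketch of what an `inst/CITATION` entry could look like (all bibliographic details below are placeholders to be replaced):

```r
# inst/CITATION (placeholder values only)
bibentry(
  bibtype = "Manual",
  title   = "fireexposuR: placeholder title",
  author  = person("Air", "Forbes"),
  year    = "2024",
  url     = "https://github.com/heyairf/fireexposuR"
)
```

`usethis::use_citation()` can create the file template for you.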
The checklist items above state that there should be contributing guidelines. Here is some information that can guide you on how to include them.
Here I have some comments/suggestions to specific sections of the package:
"This package automates the methods previously documented in a series of scientific publications.
Beverly et al. 2010 Introduces wildfire exposure and wildfire transmission distances for community scale assessments Beverly et al. 2021 Validation of the wildfire exposure metric at a landscape scale with observed fire history Beverly and Forbes 2023 Directional vulnerability assessment and methodology"
It's great to have references to published papers in a package. Nonetheless, the documentation should focus on providing clear, step-by-step instructions for using the package, rather than primarily redirecting users to scientific papers for basic usage information.
I suggest including brief explanations of the methods directly in the README, along with the citations. This would allow users to understand the package's functionality immediately from the documentation, while still having access to academic papers for a deeper understanding. Moreover, one of the papers is behind a paywall, so if a user does not have access to the journal it will complicate adoption of the package (if the documentation points to the papers first, before showing how to use the code).
The functions in this package require the pre-preparation of data; an accompanying paper (currently in preparation) details suggestions for data acquisition and preparation in accordance with various budget limitations and user experience levels."
This is also related to the previous comment. The scientific publication would make your tools more robust, and it should certainly be included in the package documentation, but try to prioritize practical usage guidance rather than redirecting users to a paper.
Think about a potential user of your package. They need to analyze wildfire exposure and they find your package. The first thing they would do is glance at the documentation and give it a try, just to check whether it works and what kind of outputs it can produce. A summary explanation with some example code will help that user skim the package and decide whether or not to use it.
So, I wouldn't redirect them to a scientific paper as the first step. Let the user explore your package and, once they have tried it, give them the option of going to the published papers for further details.
Think of the README as an abstract: it should concisely present the package's purpose and basic workflow. Users should be able to quickly assess if the package meets their needs through example code and the workflow, before diving into the underlying scientific publications for advanced theory understanding.
The package already has vignettes, which is great for documenting and explaining further how a user can benefit from using the package. There are no rules on how documentation should be arranged in vignettes, but the general trend I see and find useful is to have:
index page: Almost the same as the README. The first point of entry for a user could be the GitHub repository or the package website, so having the "abstract" in both places is good.
Get started page: This would be the place to explain the workflow of the package. I would try not to expand too much here on scientific details but rather focus on the workflow and the steps needed to use the primary functionality of the package. The deeper topics would be referenced in a vignette article.
Articles: This is the section I would use to explain further details and deeper concepts that were not covered on the "Get started" page. If a user wants to go deeper into specific details of the methods, science, and code, this is the section where they would look for that information.
Some examples of this arrangement are dplyr and gt
I liked this vignette because it already has the primary functionality and workflow of the package. I would use it as the 'Get started' page. It takes the user through a set of steps to check quickly how the package works.
About the code snippets, I suggest using complete names for the objects and blank lines to separate steps. For example, the first chunk of code in the `communityscale` vignette looks like this:
library(terra)
# read example hazard data ----------------------------------
filepath <- "extdata/hazard.tif"
haz <- terra::rast(system.file(filepath, package = "fireexposuR"))
# read example AOI
filepath <- "extdata/builtsimpleexamplegeom.csv"
g <- read.csv(system.file(filepath, package = "fireexposuR"))
aoi <- terra::vect(as.matrix(g), "polygons", crs = haz)
# generate random points
pts <- terra::spatSample(aoi, 200)
#' # ----------------------------------------------------------
I would change it the following way:
library(terra)
# Path to sample hazard data
hazard_file_path <- "extdata/hazard.tif"
# Read sample hazard data
hazard <- rast(system.file(hazard_file_path, package = "fireexposuR"))
# Read example Area of Interest (AOI)
aoi_file_path <- "extdata/builtsimpleexamplegeom.csv"
aoi_data <- read.csv(system.file(aoi_file_path, package = "fireexposuR"))
aoi_polygon <- vect(as.matrix(aoi_data), "polygons", crs = hazard)
# Generate random points
points <- spatSample(aoi_polygon, 200)
That way, it would be easier (less cognitive load) for a new user to follow the examples and the objects being created and transformed within. I suggest implementing this in all the user documentation.
If there is a need to explain further something on this 'Get started' page, I would create an article, but I would leave in this 'Get Started' page, a link to that more detailed explanation.
Further explanations on the science behind would be in 'Articles' vignettes.
Finally, I think the package's visualization examples would be even more engaging with terrestrial locations, as this would highlight the integration with base map tiles and provide users with examples that directly relate to typical wildfire analysis scenarios. If there are specific reasons/constraints for using oceanic locations in the examples, that's fine.
In general, there are several functions whose documentation starts with a reference to papers. I would divide this using `@description` and `@details`. The roxygen2 documentation states that, after the title:
The second paragraph is the description: this comes first in the documentation and should briefly describe what the function does.
The third and subsequent paragraphs go into the details: this is a (often long) section that comes after the argument description and should provide any other important details of how the function operates. The details are optional.
I suggest using the details section to include the reference to a paper if needed.
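A hedged sketch of that split (the function, wording, and arguments below are invented for illustration — they are not the package's actual documentation):

```r
#' Map classified fire exposure for an area of interest
#'
#' @description
#' Produces a classified map of fire exposure values for an area of interest.
#'
#' @details
#' Classification breaks follow the published exposure methods; see
#' Beverly et al. (2021) for validation of the metric.
#'
#' @param exp SpatRaster of exposure values.
#' @param aoi SpatVector delineating the area of interest.
#' @export
fire_exp_map_example <- function(exp, aoi) {
  # ... function body ...
}
```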
There are some functions which have fixed values in their bodies. I'm wondering if those could be arguments with default values that the user can change.
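For illustration of the general idea (a hypothetical function and value, not the package's code), a fixed value inside a function body can be promoted to an argument with a default, so existing behaviour is preserved while letting users override it:

```r
# Before: the distance is fixed inside the body.
buffer_hazard <- function(hazard) {
  t_dist <- 500  # metres, hard-coded
  terra::buffer(hazard, width = t_dist)
}

# After: the same value becomes a user-adjustable default.
buffer_hazard <- function(hazard, t_dist = 500) {
  terra::buffer(hazard, width = t_dist)
}
```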
The `fire_exp_map_class()` function documentation states: "Scales and colors are determined by the parameter classify which can be set to "local" or "landscape"". These are the kinds of explanations I would include in an 'article' vignette. Going further to explain to a user how some of the argument options affect the results, and the theory behind them, could also be part of what an 'article' vignette covers.

@ronnyhdez thanks so much for your thoughtful review!
We won't use your input for a few weeks, until @heyairf has had the chance to address both reviews (and I'm still searching for the second reviewer). Then I'll contact you again for a final approval unless there is something else that needs to be addressed.
Submitting Author Name: Air Forbes
Submitting Author Github Handle: @heyairf
Repository: https://github.com/heyairf/fireexposuR
Version submitted: 1.0.0
Submission type: Standard
Editor: @maurolepore
Reviewers: @ronnyhdez
Archive: TBD
Version accepted: TBD
Language: en
Scope
Please indicate which category or categories from our package fit policies this package falls under: (Please check an appropriate box below. If you are unsure, we suggest you make a pre-submission inquiry.):
Explain how and why the package falls under these categories (briefly, 1-2 sentences):
This package was developed to share methodologies from existing research by modifying/expanding existing functions. The functions automate and standardize assessments to increase access and quality assurance in the calculation for interested users.
Users interested in wildfire risk assessments including but not limited to researchers, consultants, planners, land use decision makers.
There are no other R packages that fill this role.
n/a
#652
`pkgcheck` items which your package is unable to pass: all pkgcheck checks are currently passing.
Technical checks
Confirm each of the following by checking the box.
This package:
Publication options
[ ] Do you intend for this package to go on CRAN?
[ ] Do you intend for this package to go on Bioconductor?
[ ] Do you wish to submit an Applications Article about your package to Methods in Ecology and Evolution? If so:
MEE Options
- [ ] The package is novel and will be of interest to the broad readership of the journal.
- [ ] The manuscript describing the package is no longer than 3000 words.
- [ ] You intend to archive the code for the package in a long-term repository which meets the requirements of the journal (see [MEE's Policy on Publishing Code](http://besjournals.onlinelibrary.wiley.com/hub/journal/10.1111/(ISSN)2041-210X/journal-resources/policy-on-publishing-code.html))
- (*Scope: Do consider MEE's [Aims and Scope](http://besjournals.onlinelibrary.wiley.com/hub/journal/10.1111/(ISSN)2041-210X/aims-and-scope/read-full-aims-and-scope.html) for your manuscript. We make no guarantee that your manuscript will be within MEE scope.*)
- (*Although not required, we strongly recommend having a full manuscript prepared when you submit here.*)
- (*Please do not submit your package separately to Methods in Ecology and Evolution*)

Code of conduct