Thanks for submitting to rOpenSci, our editors and @ropensci-review-bot will reply soon. Type @ropensci-review-bot help for help.
:rocket:
Editor check started
:wave:
git hash: e82abc78
Package License: GPL-3
The table below tallies all function calls to all packages ('ncalls'), both internal (r-base + recommended, along with the package itself), and external (imported and suggested packages). 'NA' values indicate packages to which no identified calls to R functions could be found. Note that these results are generated by an automated code-tagging system which may not be entirely accurate.
|type |package | ncalls|
|:----------|:---------|------:|
|internal |base | 78|
|internal |comtradr | 45|
|internal |stats | 15|
|internal |graphics | 7|
|internal |methods | 5|
|internal |utils | 4|
|imports |cli | 36|
|imports |httr2 | 14|
|imports |rlang | 4|
|imports |purrr | 4|
|imports |fs | 3|
|imports |lubridate | 2|
|imports |stringr | 1|
|imports |lifecycle | NA|
|imports |readr | NA|
|imports |askpass | NA|
|imports |poorman | NA|
|suggests |covr | NA|
|suggests |dplyr | NA|
|suggests |ggplot2 | NA|
|suggests |httptest2 | NA|
|suggests |knitr | NA|
|suggests |rmarkdown | NA|
|suggests |testthat | NA|
|suggests |vcr | NA|
|linking_to |NA | NA|
Tallies of functions used in each package are given below. Locations of each call within this package may be generated locally by running `pkgstats::pkgstats()` on the package source.
**base**: c (44), list (4), body (3), paste0 (3), colnames (2), data.frame (2), get (2), if (2), return (2), switch (2), as.numeric (1), by (1), length (1), message (1), names (1), new.env (1), paste (1), seq.Date (1), Sys.getenv (1), tolower (1), which (1), with (1)

**comtradr**: ct_get_ref_table (8), check_date (2), comtrade_after (2), comtrade_error_body (2), comtrade_is_transient (2), ct_build_request (2), ct_check_params (2), ct_download_ref_table (2), ct_perform_request (2), ct_process_response (2), is_year (2), check_clCode (1), check_customsCode (1), check_flowCode (1), check_freq (1), check_motCode (1), check_partner2Code (1), check_partnerCode (1), check_reporterCode (1), check_type (1), convert_to_date (1), ct_commodity_db_type (1), ct_commodity_lookup (1), ct_country_lookup (1), ct_get_data (1), ct_get_remaining_hourly_queries (1), ct_get_reset_time (1), ct_register_token (1)

**cli**: cli_inform (34), cli_warn (2)

**stats**: update (9), frequency (5), time (1)

**httr2**: resp_body_json (4), resp_header (4), request (2), req_error (1), req_headers (1), req_retry (1), req_throttle (1)

**graphics**: text (7)

**methods**: isGroup (4), new (1)

**purrr**: map (2), map_chr (1), map_int (1)

**rlang**: arg_match (4)

**utils**: data (4)

**fs**: path_package (3)

**lubridate**: year (2)

**stringr**: str_extract (1)
This package features some noteworthy statistical properties which may need to be clarified by a handling editor prior to progressing.
The package has:
- code in R (100% in 10 files)
- 3 authors
- 1 vignette
- 2 internal data files
- 11 imported packages
- 16 exported functions (median 5 lines of code)
- 81 non-exported functions in R (median 5 lines of code)

---

Statistical properties of package structure as distributional percentiles in relation to all current CRAN packages. The following terminology is used:
- `loc` = "Lines of Code"
- `fn` = "function"
- `exp`/`not_exp` = exported / not exported

All parameters are explained as tooltips in the locally-rendered HTML version of this report generated by [the `checks_to_markdown()` function](https://docs.ropensci.org/pkgcheck/reference/checks_to_markdown.html). The final measure (`fn_call_network_size`) is the total number of calls between functions (in R), or more abstract relationships between code objects in other languages. Values are flagged as "noteworthy" when they lie in the upper or lower 5th percentile.

|measure | value| percentile|noteworthy |
|:------------------------|-----:|----------:|:----------|
|files_R | 10| 59.0| |
|files_vignettes | 1| 68.4| |
|files_tests | 10| 90.7| |
|loc_R | 742| 59.6| |
|loc_vignettes | 232| 54.7| |
|loc_tests | 316| 65.4| |
|num_vignettes | 1| 64.8| |
|data_size_total | 6739| 68.8| |
|data_size_median | 3369| 72.0| |
|n_fns_r | 97| 75.7| |
|n_fns_r_exported | 16| 60.6| |
|n_fns_r_not_exported | 81| 79.5| |
|n_fns_per_file_r | 8| 81.5| |
|num_params_per_fn | 1| 1.6|TRUE |
|loc_per_fn_r | 5| 8.1| |
|loc_per_fn_r_exp | 5| 7.3| |
|loc_per_fn_r_not_exp | 5| 9.7| |
|rel_whitespace_R | 21| 64.9| |
|rel_whitespace_vignettes | 37| 58.2| |
|rel_whitespace_tests | 15| 55.5| |
|doclines_per_fn_exp | 16| 7.0| |
|doclines_per_fn_not_exp | 0| 0.0|TRUE |
|fn_call_network_size | 35| 58.7| |

---
`goodpractice` and other checks

#### 3a. Continuous Integration Badges

[![R-CMD-check.yaml](https://github.com/ropensci/comtradr/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/ropensci/comtradr/actions)

**GitHub Workflow Results**

| id|name |conclusion |sha | run_number|date |
|----------:|:-------------|:----------|:------|----------:|:----------|
| 6522795714|pkgcheck |success |e82abc | 5|2023-10-15 |
| 6522795715|R-CMD-check |NA |e82abc | 50|2023-10-15 |
| 6522795709|test-coverage |success |e82abc | 30|2023-10-15 |

---

#### 3b. `goodpractice` results

#### `R CMD check` with [rcmdcheck](https://r-lib.github.io/rcmdcheck/)

R CMD check generated the following note:
1. checking data for non-ASCII characters ... NOTE
   Note: found 11 marked UTF-8 strings

R CMD check generated the following check_fail:
1. rcmdcheck_non_ascii_characters_in_data

#### Test coverage with [covr](https://covr.r-lib.org/)

Package coverage: 79.64

#### Cyclocomplexity with [cyclocomp](https://github.com/MangoTheCat/cyclocomp)

No functions have cyclocomplexity >= 15

#### Static code analyses with [lintr](https://github.com/jimhester/lintr)

[lintr](https://github.com/jimhester/lintr) found the following 183 potential issues:

message | number of times
--- | ---
Avoid library() and require() calls in packages | 14
Lines should not be more than 80 characters. | 169
|package |version |
|:--------|:-------|
|pkgstats |0.1.3.9 |
|pkgcheck |0.1.2.9 |
This package is in top shape and may be passed on to a handling editor
@ropensci-review-bot assign @noamross as editor
Assigned! @noamross is now the editor
@ropensci-review-bot seeking reviewers
Please add this badge to the README of your package repository:
[![Status at rOpenSci Software Peer Review](https://badges.ropensci.org/613_status.svg)](https://github.com/ropensci/software-review/issues/613)
Furthermore, if your package does not have a NEWS.md file yet, please create one to capture the changes made during the review process. See https://devguide.ropensci.org/releasing.html#news
Thanks @datapumpernickel for your submission (and your work taking over maintenance and bringing this package up to speed). As the bot says, this package is in nice shape, and I'll be seeking reviewers.
Thank you, I am looking forward to the review and feedback!
@ropensci-review-bot assign @ernestguevarra as reviewer
@ernestguevarra added to the reviewers list. Review due date is 2023-11-06. Thanks @ernestguevarra for accepting to review! Please refer to our reviewer guide.
rOpenSci’s community is our best asset. We aim for reviews to be open, non-adversarial, and focused on improving software quality. Be respectful and kind! See our reviewers guide and code of conduct for more.
@ernestguevarra: If you haven't done so, please fill this form for us to update our reviewers records.
:calendar: @ernestguevarra you have 2 days left before the due date for your review (2023-11-06).
Hi @datapumpernickel. Just a quick note to say that I am almost done with a comprehensive pass at reviewing your package! Sorry that I have been delayed. When I accepted the review role, I forgot where I should look to see the review process and my GitHub notifications were getting archived on my email.
Otherwise, I will be posting my review notes here in the next couple of hours.
Hi @ernestguevarra, no worries. You are just in time! Looking forward to the review. Thank you for the work to look into everything already!
@ropensci-review-bot assign @potterzot as reviewer
@potterzot added to the reviewers list. Review due date is 2023-11-27. Thanks @potterzot for accepting to review! Please refer to our reviewer guide.
rOpenSci’s community is our best asset. We aim for reviews to be open, non-adversarial, and focused on improving software quality. Be respectful and kind! See our reviewers guide and code of conduct for more.
@potterzot: If you haven't done so, please fill this form for us to update our reviewers records.
@datapumpernickel, here is my review:
Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide
The package includes all the following forms of documentation:
- A DESCRIPTION with `URL`, `BugReports` and `Maintainer` fields (which may be autogenerated via `Authors@R`).

Estimated hours spent reviewing:
Both local install and remote install (via GitHub) of the package proceeded without issues and as expected/documented.
`{devtools}` checks. Check results were:

── R CMD check results ──────────────────────────────────────────────────────────────────────────── comtradr 0.4.0.0 ────
Duration: 19.1s

0 errors ✔ | 0 warnings ✔ | 0 notes ✔
`{devtools}` test. Test results were:

══ Results ══════════════════════════════════════════════════════════════════════════════════════════════════════════════
Duration: 2.0 s

[ FAIL 0 | WARN 0 | SKIP 0 | PASS 78 ]
`{goodpractice}` test/check:

── GP comtradr ──────────────────────────────────────────────────────────────────────────────────────────────────────────
It is good practice to
✖ write unit tests for all functions, and all package code in general. 79% of code lines are
covered by test cases.
R/ct_build_request.R:52:NA
R/ct_check_params.R:296:NA
R/ct_check_params.R:300:NA
R/ct_check_params.R:301:NA
R/ct_check_params.R:302:NA
... and 97 more lines
✖ avoid long code lines, it is bad for readability. Also, many people prefer editor windows that
are about 80 characters wide. Try make your lines shorter than 80 characters
data-raw/cmd_codes.R:7:81
data-raw/country_codes.R:7:81
data-raw/country_codes.R:22:81
data-raw/country_codes.R:53:81
data-raw/DATASET.R:11:81
... and 164 more lines
✖ fix this R CMD check NOTE: Note: found 11 marked UTF-8 strings
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
On unit tests for all functions: as per the rOpenSci Packaging Guide documentation on tests, a minimum coverage of 75% is required, so this should be OK. I looked at a few of the lines that are not covered and they seem to be the ones that tend to be hard to test with CRAN and rhub in mind. For the sake of being thorough and complete, a final pass over the non-covered lines could check whether some of them can be covered with small adjustments to the existing test suites and/or by adding straightforward tests.
On avoiding long lines: a number of these come from the scripts inside the `data-raw` directory, which process the reference data used in the package. I am not 100% sure about this, but for those I think strict linting may not be necessary; the `data-raw` directory can probably be skipped in the lint checks. For the others, the offending lines tend to be long URLs and long messages that guide the user through a warning or an error. These are always challenging, as URLs can't really be split/wrapped across lines, while error/warning messages may not show up on the console in a tidy/pretty way. For these, I might suggest adding `# nolint` at the end of the long line to quiet down these messages from the good practice check.
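For the `# nolint` suggestion, a minimal illustration (the URL below is just a placeholder, not a line from the package):

```r
# A long line that cannot reasonably be wrapped (e.g. a URL) can carry a
# trailing nolint marker so lintr / goodpractice stop flagging its length:
ref_url <- "https://comtradeapi.un.org/files/v1/app/reference/ListofReferences.json" # nolint
```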
On the 11 marked UTF-8 strings: this NOTE doesn't come up on `devtools::check()`, so maybe it comes out on rhub checks? These are likely coming from the datasets included in the package, particularly country names with accented characters. I am not fully certain what best practice is for this. Ideally the actual names provided by Comtrade are kept in the dataset, but to quiet down this issue it might be necessary to convert the text to ASCII. For this, `stringi::stri_enc_toascii()` might be useful.
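To illustrate that suggestion (a sketch only: the `country` column name is an assumption about the bundled data, and dropping accents is a trade-off the maintainer may prefer to avoid):

```r
library(stringi)

# The suggestion above: force the names to ASCII (non-ASCII characters are
# substituted), e.g. for the bundled country names before saving the data:
ascii_names <- stri_enc_toascii(country_codes$country)

# A transliteration-based alternative that keeps names readable ("é" -> "e"):
ascii_names <- stri_trans_general(country_codes$country, "Latin-ASCII")
```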
Running `devtools::spell_check()`, I see:

WORD FOUND IN
️ README.md:49,102,124
README.Rmd:40,83,106
accomodate README.md:28
README.Rmd:30
arg ct_commodity_lookup.Rd:40
NEWS.md:26,56,57,58
comtradr.Rmd:98,194,211
args ct_commodity_lookup.Rd:45
NEWS.md:50
ASEAN country_codes.Rd:16
CMD README.md:10
README.Rmd:18
Codecov README.md:12
README.Rmd:19
comtrade ct_build_request.Rd:10
ct_perform_request.Rd:10,17
get_primary_comtrade_key.Rd:5,10
set_primary_comtrade_key.Rd:10,13
README.md:91
README.Rmd:75
comtradr.Rmd:61
Comtrade comtradr-package.Rd:7,11
country_codes.Rd:17,18,28
ct_commodity_lookup.Rd:5,52,54,57
ct_get_data.Rd:5,53,69,77
ct_perform_request.Rd:5,20
get_primary_comtrade_key.Rd:13
set_primary_comtrade_key.Rd:5,16
title:1
description:1,2
NEWS.md:29,54,77
README.md:15,16,24
README.Rmd:22,22,28
comtradr.Rmd:29,29,119,183,203,211
COMTRADE get_primary_comtrade_key.Rd:10,13
set_primary_comtrade_key.Rd:16
env get_primary_comtrade_key.Rd:13
set_primary_comtrade_key.Rd:16
func NEWS.md:26,32,34,36
github comtradr-package.Rd:33,34
https comtradr-package.Rd:33,34
httr ct_build_request.Rd:17,20
ct_get_data.Rd:47,66
ct_process_response.Rd:10,20
ℹ️ README.md:95
README.Rmd:77
comtradr.Rmd:63
iso ct_country_lookup.Rd:16
ct_process_response.Rd:20
json ct_perform_request.Rd:17
ct_process_response.Rd:20
NEWS.md:34
natively comtradr.Rmd:192
onboarding comtradr-package.Rd:33,34
ORCID comtradr-package.Rd:23,33,34
Readme NEWS.md:4
repo README.md:51
README.Rmd:42
ropensci comtradr-package.Rd:33,34
README.md:150
README.Rmd:130
rOpenSci comtradr-package.Rd:33,34
README.md:9
README.Rmd:17
stringified NEWS.md:36
testthat NEWS.md:83,99
twi ct_pretty_cols.Rd:9
useability README.md:36
README.Rmd:32
For the README, please address some of the non-false-positive results produced by `devtools::spell_check()`. After fixing those, please run `usethis::use_spell_check()`, which will record the words that have been false-positively flagged as misspelled into a `WORDLIST` kept in the `inst` folder. Once they are on this list, they will no longer be flagged as misspelled. Please note that when running `usethis::use_spell_check()`, the language will default to `en-US` and will be added to your DESCRIPTION file, and the `spelling` package will be added to `Suggests`.
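A sketch of that workflow using the standard devtools/usethis calls (nothing package-specific assumed):

```r
devtools::spell_check()     # review flagged words, fix real misspellings first
usethis::use_spell_check()  # adds spelling to Suggests, sets Language: en-US,
                            # and records false positives in inst/WORDLIST
spelling::spell_check_package()  # re-run to confirm only WORDLIST entries remain
```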
For the README, it might be good to make the first sentence of the first paragraph consistent with the text used in the DESCRIPTION file, i.e. something like "R package for interfacing with and extracting data from ...", so the two read coherently with each other. Otherwise, the README clearly states the problems the software is designed to solve and its target audience.
Maybe organise the entries in the DESCRIPTION file more coherently. For example, move the line for Maintainer right after the line(s) for Authors@R, and put the entries for Suggests right after the entries for Imports. The DESCRIPTION file will still work without these changes, but it will read better if they are made.
No issues with the NAMESPACE
Key functions of the package have help files that are generally appropriate and describe in fair detail what the function does and what it needs to produce such output.
Supporting functions or utility functions, however, tend to be less specific in the description of their functionality. In most cases, these are functions that users will most likely not use directly as they are used by the package within the key/main functions. As such, it might be good to note this with the description of the function.
Some other functions are legacy functions that are lined up for deprecation or have been superseded. For these, it might be good to have text in the description that indicates what the deprecation plan is for these functions.
In the package website built using `{pkgdown}`, the reference page for the function help files might benefit from some organisation of the functions. Groupings will depend on how you want to organise them, but generally they are based on related functionality. This is also a good way to group data elements, legacy functions, and functions meant for deprecation. It can be done by creating a `_pkgdown.yml` file with `usethis::use_pkgdown()`. Once that file is created, you can edit/customise it in many ways, including grouping the reference pages for each function. To see how this is done, look at the `_pkgdown.yml` file of the `{pkgdown}` package website: https://github.com/r-lib/pkgdown/blob/main/pkgdown/_pkgdown.yml.
The package currently has one vignette, which gets produced as the Getting Started vignette. It expands the content of the README with more focused examples. This is good, as it re-emphasises the main use case of the `{comtradr}` package.
It might be good to consider adding at least one other vignette aimed at those who have used or are familiar with the earlier iteration of the package and explain/differentiate through a specific use case (i.e., retrieving data) the previous approach as compared to the current approach. In this, links to the new data structure for Comtrade provided by the UN will be helpful as this will give the user an idea of what new fields they can expect in the output of the function that retrieves data.
Another vignette that might be useful, specifically for those who used the previous version to bulk download data from Comtrade, is one about what users need to consider if they are going to do a bulk download and which features of the new package help with this (limits on the number of years, limits on the number of characters in API calls, throttling of the API calls). These elements are in different places in the documentation but would be good to put together in a vignette with a worked example of a bulk download use case (i.e., multiple years, lots of commodities).
I have been using the new `{comtradr}` package since June of this year for a project I am involved in that is downloading data on 963 commodities for all countries from 2000 to 2023. The team I am working with had original/previous code that used the `{tradestatistics}` package to do this at the time of the previous API specification for UN Comtrade. When I joined the team for the second phase of the project, the API changed, and both the `{tradestatistics}` package and the legacy `{comtradr}` package underwent changes to accommodate the new API specifications. For the purposes we need it for (bulk download), the updated `{comtradr}` package was fit for our needs, as its main function for getting data factors in all the rate limits that the new API has put in place and also includes API call throttling (which I really think is very well thought out and well implemented). Whilst there are still sometimes 500 errors due to the UN Comtrade server timing out (probably due to heavy query loads), the `{comtradr}` package has really been able to perform its single most important function of retrieving data from UN Comtrade.

So, based on this experience since June, I can say that I've been able to push the `{comtradr}` functionality to its limits and, given the restrictions of a free API account, we are happy that we are able to retrieve the data we need using this package.
A few things that may be considered by the author/maintainer (probably as new features in upcoming updates) are:
A better mechanism to catch the infrequent 500 errors that pop up when the UN Comtrade server times out, with an appropriate error message that gives information regarding this. Because of how well `{comtradr}` deals with the throttling, if you are doing a reasonably sized bulk download the 500 errors don't come up and the waiting-time messaging on the console is informative enough. But for instances like the one I described above, it might be useful to have a bit more informative messaging when a 500 error does come up.
Maybe consider adding an argument that allows the user to specify how much to throttle the API calls. From what I can see in the functions of the package, the throttling is fixed at 6 calls per minute. In our use case above, we would have liked the capability to throttle the calls a little bit more (see the sketch after this list).
Maybe consider functions that deal with the added parameters for paid accounts. I think this is not easy to implement, as it would mean having a paid account yourself, so this might be lower on the priority list. But I think it would be good in the future to have these, particularly for the bulk download capabilities of a paid account.
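A rough sketch of how the throttling and 500-error suggestions above could look, assuming the package keeps building requests with httr2 (function and argument names here are hypothetical, not comtradr's actual API):

```r
library(httr2)

# Hypothetical wrapper: `requests_per_minute` exposes the currently fixed
# throttle, and req_retry()/req_error() retry transient failures and surface
# a clearer message when the UN Comtrade server returns a 500.
ct_perform_request_sketch <- function(req, requests_per_minute = 6, max_tries = 3) {
  req |>
    req_throttle(rate = requests_per_minute / 60) |>
    req_retry(
      max_tries = max_tries,
      is_transient = function(resp) resp_status(resp) %in% c(429, 500)
    ) |>
    req_error(body = function(resp) {
      paste0(
        "UN Comtrade returned status ", resp_status(resp),
        "; the server may be timing out under load. Retrying later or ",
        "lowering `requests_per_minute` can help."
      )
    }) |>
    req_perform()
}
```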
@datapumpernickel you may notice that a big chunk of my review has been on the documentation. As I was doing the review, I forked your package and have been making edits on the docs/help files/vignettes as I was reviewing them. I am happy to make a pull request with my edits so you can have a look at what these changes may look like. Let me know if you think this would be helpful.
Otherwise, thank you for your good work on getting {comtradr}
revived. It's great work that you have put in and I think we can polish it even more! Well done!
Hi @ernestguevarra!
first of all, thank you again for your thorough review and helpful comments - this is great! I am also really happy to hear that you were able to already put the package to use and found it useful.
1) Yes, I would be very happy to have a pull request with your edits and corrections and merge it into the package. Maybe you could make the pull request against the `dev` branch. Thanks!
2) I will subsequently work through your comments and either make the respective changes or add them to the list of future features! Already now I can say that allowing the user to control throttling is a great idea and easy to implement. I was hesitant to add more arguments to the already long list, but I totally see how that is necessary. Paid features are indeed on the list for future development; I hope to have premium access next year.
@datapumpernickel I will finalise my edits on the package today (mainly documentation) and will make a PR to `dev` as recommended.
Also, you can just use the PR as a reference for what I am suggesting to change and you can implement the changes yourself as you update the package. I think sometimes that might be easier so you have full control of the changes. So, do with it as you deem best and most efficient for your workflow.
👋 Hello @potterzot, just a friendly reminder your review is due.
@ropensci-review-bot check package
Thanks, about to send the query.
:rocket:
The following problem was found in your submission template:
:wave:
git hash: c44b45ce
Package License: GPL-3
The table below tallies all function calls to all packages ('ncalls'), both internal (r-base + recommended, along with the package itself), and external (imported and suggested packages). 'NA' values indicate packages to which no identified calls to R functions could be found. Note that these results are generated by an automated code-tagging system which may not be entirely accurate.
|type |package | ncalls|
|:----------|:---------|------:|
|internal |base | 78|
|internal |comtradr | 46|
|internal |stats | 11|
|internal |graphics | 4|
|internal |methods | 4|
|internal |utils | 2|
|imports |cli | 37|
|imports |httr2 | 14|
|imports |rlang | 4|
|imports |purrr | 4|
|imports |fs | 2|
|imports |lubridate | 2|
|imports |stringr | 1|
|imports |lifecycle | NA|
|imports |readr | NA|
|imports |askpass | NA|
|imports |poorman | NA|
|suggests |covr | NA|
|suggests |dplyr | NA|
|suggests |ggplot2 | NA|
|suggests |httptest2 | NA|
|suggests |knitr | NA|
|suggests |rmarkdown | NA|
|suggests |spelling | NA|
|suggests |testthat | NA|
|suggests |vcr | NA|
|linking_to |NA | NA|
Tallies of functions used in each package are given below. Locations of each call within this package may be generated locally by running `pkgstats::pkgstats()` on the package source.
**base**: c (45), list (4), body (3), paste0 (3), colnames (2), data.frame (2), get (2), if (2), return (2), switch (2), as.numeric (1), by (1), length (1), message (1), names (1), new.env (1), paste (1), seq.Date (1), Sys.getenv (1), tolower (1), which (1)

**comtradr**: ct_get_ref_table (8), check_date (2), comtrade_after (2), comtrade_error_body (2), comtrade_is_transient (2), ct_build_request (2), ct_check_params (2), ct_download_ref_table (2), ct_perform_request (2), ct_process_response (2), is_year (2), check_clCode (1), check_customsCode (1), check_flowCode (1), check_freq (1), check_motCode (1), check_partner2Code (1), check_partnerCode (1), check_reporterCode (1), check_type (1), convert_to_date (1), ct_commodity_db_type (1), ct_commodity_lookup (1), ct_country_lookup (1), ct_get_data (1), ct_get_remaining_hourly_queries (1), ct_get_reset_time (1), ct_register_token (1), ct_search (1)

**cli**: cli_inform (35), cli_warn (2)

**httr2**: resp_body_json (4), resp_header (4), request (2), req_error (1), req_headers (1), req_retry (1), req_throttle (1)

**stats**: frequency (5), update (5), time (1)

**graphics**: text (4)

**methods**: isGroup (4)

**purrr**: map (2), map_chr (1), map_int (1)

**rlang**: arg_match (4)

**fs**: path_package (2)

**lubridate**: year (2)

**utils**: data (2)

**stringr**: str_extract (1)
This package features some noteworthy statistical properties which may need to be clarified by a handling editor prior to progressing.
The package has:
- code in R (100% in 10 files)
- 3 authors
- 1 vignette
- 2 internal data files
- 11 imported packages
- 17 exported functions (median 6 lines of code)
- 80 non-exported functions in R (median 4 lines of code)

---

Statistical properties of package structure as distributional percentiles in relation to all current CRAN packages. The following terminology is used:
- `loc` = "Lines of Code"
- `fn` = "function"
- `exp`/`not_exp` = exported / not exported

All parameters are explained as tooltips in the locally-rendered HTML version of this report generated by [the `checks_to_markdown()` function](https://docs.ropensci.org/pkgcheck/reference/checks_to_markdown.html). The final measure (`fn_call_network_size`) is the total number of calls between functions (in R), or more abstract relationships between code objects in other languages. Values are flagged as "noteworthy" when they lie in the upper or lower 5th percentile.

|measure | value| percentile|noteworthy |
|:------------------------|-----:|----------:|:----------|
|files_R | 10| 59.0| |
|files_vignettes | 1| 68.4| |
|files_tests | 12| 92.5| |
|loc_R | 763| 60.4| |
|loc_vignettes | 232| 54.7| |
|loc_tests | 370| 68.7| |
|num_vignettes | 1| 64.8| |
|data_size_total | 6739| 68.8| |
|data_size_median | 3369| 72.0| |
|n_fns_r | 97| 75.7| |
|n_fns_r_exported | 17| 62.4| |
|n_fns_r_not_exported | 80| 79.2| |
|n_fns_per_file_r | 8| 81.5| |
|num_params_per_fn | 1| 1.6|TRUE |
|loc_per_fn_r | 5| 8.1| |
|loc_per_fn_r_exp | 6| 10.5| |
|loc_per_fn_r_not_exp | 4| 9.3| |
|rel_whitespace_R | 21| 65.5| |
|rel_whitespace_vignettes | 37| 58.2| |
|rel_whitespace_tests | 15| 58.1| |
|doclines_per_fn_exp | 16| 7.3| |
|doclines_per_fn_not_exp | 0| 0.0|TRUE |
|fn_call_network_size | 35| 58.7| |

---
`goodpractice` and other checks

#### 3a. Continuous Integration Badges

[![R-CMD-check.yaml](https://github.com/ropensci/comtradr/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/ropensci/comtradr/actions)

**GitHub Workflow Results**

| id|name |conclusion |sha | run_number|date |
|----------:|:-------------|:----------|:------|----------:|:----------|
| 6905436127|pkgcheck |success |c44b45 | 17|2023-11-17 |
| 6905436125|R-CMD-check |success |c44b45 | 64|2023-11-17 |
| 6905436124|test-coverage |success |c44b45 | 44|2023-11-17 |

---

#### 3b. `goodpractice` results

#### `R CMD check` with [rcmdcheck](https://r-lib.github.io/rcmdcheck/)

R CMD check generated the following note:
1. checking data for non-ASCII characters ... NOTE
   Note: found 11 marked UTF-8 strings

R CMD check generated the following check_fail:
1. rcmdcheck_non_ascii_characters_in_data

#### Test coverage with [covr](https://covr.r-lib.org/)

Package coverage: 82.15

#### Cyclocomplexity with [cyclocomp](https://github.com/MangoTheCat/cyclocomp)

No functions have cyclocomplexity >= 15

#### Static code analyses with [lintr](https://github.com/jimhester/lintr)

[lintr](https://github.com/jimhester/lintr) found the following 140 potential issues:

message | number of times
--- | ---
Avoid library() and require() calls in packages | 14
Lines should not be more than 80 characters. | 126
|package |version |
|:--------|:--------|
|pkgstats |0.1.3.9 |
|pkgcheck |0.1.2.11 |
This package is in top shape and may be passed on to a handling editor
The package includes all the following forms of documentation:
- A DESCRIPTION with `URL`, `BugReports` and `Maintainer` fields (which may be autogenerated via `Authors@R`).

Estimated hours spent reviewing: 4
First, my deep apologies for the delay of my review. Second, thank you for putting together such a clear and carefully written package. It was a pleasure to review. My comments are mostly limited to a set of issues that I ran into in the documentation and functionality of the `World`, `All`, and `all` values for some `ct_get_data()` parameters.
I have included modifications to the documentation in a PR that I will open once this review is submitted, but these are just suggestions of course.
No issues with installation.
`devtools::check()` and `devtools::test()` present no issues.
- The `partner` parameter in `ct_get_data()` defaults to "World" as noted in the documentation, but it looks like "all" is an option as well that will successfully get all partners. It would be worth updating the documentation to reflect "all" as an option, since it isn't in `comtradr::country_codes$iso_3`. The `reporter` parameter might be clarified similarly. In my PR I've modified the documentation for these parameters with a suggestion of "Partner ISO3 code(s), `all`, or `NULL`, ...".
- There is a vignette for `comtradr`, which is great. Only, the documentation in the vignette uses "All" instead of "all", which is what the function `ct_check_params()` requires here. The vignette refers to `All`, but it is implemented in lower case, and for all of the `reporter`, `partner`, and `partner2` parameters.
- The vignette suggests that setting `partner` to `NULL` will result in "all" being used, but currently it results in "World" being used. This 'total' default is true for the `mode_of_transport` and `customs_code` parameters as well, but the vignette suggests it should return all values, not the total.
- In `ct_get_data()`, the `mode_of_transport` documentation does not make it clear that the `id` is needed and not the text. For example, for `Air` it must be `ct_get_data(mode_of_transport = 1000, ...)`, not `ct_get_data(mode_of_transport = Air, ...)`. Making the parameter documentation "Mode of Transport ID code..." would clear that up.
- Unlike `partner`, the `mode_of_transport` and `customs_code` parameters in `ct_get_data()` do not take `all` as a valid parameter value. Perhaps this could be added?

@ropensci-review-bot submit review https://github.com/ropensci/software-review/issues/613#issuecomment-1848610962 time 4
Logged review for potterzot (hours: 4)
@ropensci-review-bot submit review https://github.com/ropensci/software-review/issues/613#issuecomment-1796296108 time 4
Logged review for ernestguevarra (hours: 4)
Thanks for your review @potterzot! @datapumpernickel, now both reviews are in. Let us know when you've implemented and/or have a response to the requested changes
Hi @noamross @ernestguevarra and @potterzot,
thanks to all again for your valuable time and the thorough review! Your comments, additions and tests have already been of great help!
I have made an issue with all the requested changes and am tracking which ones I have already implemented there: https://github.com/ropensci/comtradr/issues/69
Some of the larger documentation tasks (new vignettes for transitioning from older iteration and on how to loop for bulk data) might take me a little longer, but some I might get done in the next week. I will report back once I am done with all the changes. Most I have no further comments on, as I find them sensible and helpful!
@ernestguevarra Would you mind submitting that pull request you wanted to do? It would also be helpful to me, if you have any example code that led you to get 500 Errors! Thanks!
There are two issues where I think I need some more input:
Currently, I use the `...` to allow users to add arbitrary parameters to the URL call, in order to have a certain flexibility should the API add new parameters that I have not implemented yet. These are passed on as is, without any checking. Using `...` has been very easy to implement, but I had at least one user who ran into the following issue:
Upon specifying a call, like:
comtradr::ct_get_data(reporter = "DEU",
patner ='ARG',
start_date = 2020,
end_date = 2020)
The API did not work as expected. Instead of returning partner Argentina, it returned partner "World" without an error, because the mistyped `patner` argument was just caught by the `...`. So this seems to be very error-prone.
I am now tending towards just excluding the `...`, because the main functionality is covered, and should there be new parameters that are really urgent, I am fairly confident we can implement them relatively quickly. However, do you have any other idea on how to allow passing arbitrary parameters to the function, e.g. an argument that takes a named list? I am a bit out of my depth here.
As has been noted by @potterzot, there is some confusion around this. Here is what's happening:
comtradr::ct_get_data(reporter = "DEU",
start_date = 2020,
end_date = 2020)
--> This returns Germany's trade with "World"; as the argument is unspecified, it defaults to "World".
comtradr::ct_get_data(reporter = "DEU",
partner = NULL,
start_date = 2020,
end_date = 2020)
--> This returns Germany's trade with all possible partners, notably including World!
comtradr::ct_get_data(reporter = "DEU",
partner ='all',
start_date = 2020,
end_date = 2020)
This returns Germany's trade with all partner countries that are not groups (e.g. ASEAN is not included, World is not included). This is what you would use if you wanted to aggregate the World values yourself.
When I implemented this, it seemed clear to me, but I can now see how this confuses people. I am thinking about changing the default from "World" to NULL, because in the documentation it is more intuitive to think of leaving the argument blank as similar to setting it to NULL. However, for the parameters `customs_code` and `mode_of_transport` it is quite confusing when, all of a sudden, for one trade relationship you get a bunch of unexpected entries with different modes of transport and customs codes, when in 99% of all cases what you want is only the total (this data is relatively new in Comtrade and not that reliable yet either). Hence the default "total".
Do you have any ideas/suggestions on how to best solve this?
Maybe I should just explain it in the documentation somewhere, as explicitly as I have here?
Thank you again for your time, really highly appreciated!
@noamross One more question for you: since this package has been reviewed in the past, there are two people listed in the Contributors file who reviewed the package back then. I would just add the two new reviewers to this list, right? We can also keep the old reviewers in, although technically they have not reviewed the package as it is now, because we virtually re-wrote 90% of the functions. I am not sure what the policy is here; if there is none, I would just keep everybody in the list. Thanks!
@datapumpernickel This a new one for us, but I would say yes, keep the previous reviewers, just as you retain the original author. Their contribution to the lineage of the package remains, even if much of the code they reviewed does not.
Q1: I think an `extra_params` argument that takes a list of additional parameters would serve. I think having this option is useful from the perspective of reducing future maintainer burden.
Q2: I agree that NULL is a common default value. If NULL is an option, I would have it be the default, and not an uncommonly used option. I would suggest (a) that the default return smaller data, and (b) that all the options be equivalent. So in this case my suggestion is to retain 'World' as the default and remove NULL as an option. Instead, come up with another descriptive named option like `everything` that does the equivalent of NULL.
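A minimal sketch of that `extra_params` idea (hypothetical helper, not comtradr's actual implementation): extra query parameters arrive as a named list, and unnamed entries are rejected so a typo like `patner` can no longer slip through silently.

```r
# Hypothetical: merge user-supplied extra parameters into the query list,
# requiring every entry to be named so misspelled arguments error out.
ct_add_extra_params <- function(query, extra_params = list()) {
  stopifnot(is.list(extra_params))
  if (length(extra_params) > 0 &&
      (is.null(names(extra_params)) || any(names(extra_params) == ""))) {
    stop("All entries of `extra_params` must be named, e.g. list(motCode = '1000').")
  }
  utils::modifyList(query, extra_params)
}
```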
@datapumpernickel just a couple of thoughts on Q1 and Q2:

On Q1 and the misspelled `partner` example: in the `rnassqs` package, we handle it by checking all parameters for validity before querying. In this way a misspelled parameter will return an error that essentially says "Error: `patner` is not a valid parameter."

#' Expand a list handling null list as well
expand_list <- function(...) {
  x <- list(...)
  if (length(x) == 0) {
    NULL
  } else {
    if (is.null(names(x))) x[[1]] else x
  }
}

#' Check validity of parameters
parameter_is_valid <- function(param) {
  valid_params <- toupper(c(nassqs_params(), "param"))
  param2 <- gsub("__LE|__LT|__GT|__GE|__LIKE|__NOT_LIKE|__NE", "", toupper(param))
  if (!param2 %in% valid_params) {
    stop("Parameter '", param, "' is not a valid parameter. Use `nassqs_params()` ",
         "for a list of valid parameters.")
  }
  return(invisible())
}

#' Make the query, checking parameter validity beforehand
nassqs_GET <- function(...,
                       api_path = c("api_GET",
                                    "get_param_values",
                                    "get_counts"),
                       progress_bar = TRUE,
                       format = c("csv", "json", "xml")) {
  # match args
  api_path <- match.arg(api_path)
  format <- match.arg(format)

  params <- expand_list(...)

  # Check that names of the parameters are in the valid parameter list
  for (x in names(params)) { parameter_is_valid(x) }

  ...
}
Here are links to the above functions to see the code in entirety: expand_list(), parameter_is_valid(), nassqs_GET
There is a function that helps with this that will just return all valid parameters and their description to help with usability as well.
On the default values, two options come to mind: keep the default as `total` and check in the function to return an error if the value is `NULL` (`total` would return the totals, `all` would return all subcategories but not the total, and `NULL` would return an error); or default to `NULL`, which returns the total by default, and allow an `all` option that would return all values.

In general, as a user I prefer a default behavior that fetches less data, so I would not prefer the option of having a default that fetches all subcategory values.
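A rough sketch of the first of those two options (hypothetical helper name and deliberately simplified choices, not comtradr's code), just to make the behaviour concrete:

```r
# "total" (the default) keeps only the aggregate, "all" expands to every
# subcategory, and NULL errors out instead of silently changing the result.
check_mot_arg <- function(mode_of_transport = "total") {
  if (is.null(mode_of_transport)) {
    stop('`mode_of_transport` must be "total" or "all", not NULL.')
  }
  match.arg(mode_of_transport, choices = c("total", "all"))
}
```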
Hi @datapumpernickel. rOpenSci Community Manager here. If you want to join our Slack, I can send you an invitation. Please let me know. You can write to me at yabellini@ropensci.org.
Dear @ernestguevarra, @potterzot and @noamross,
I have now implemented all changes to the best of my abilities. Thank you also for the helpful feedback on my additional questions. This has introduced a minor breaking change, as `all` is not available anymore. But since the package is not on CRAN yet and the change is beneficial, I think that is ok.

I have basically kept the defaults as restrictive as they were, so as not to return too much data. There is no more NULL option; instead you can pass `everything`, and in the case of `reporter` and `partner` you can also pass `all_countries` (formerly `all`), which makes it clear that not every parameter value, but only countries, are returned.
I have also gotten rid of `...` in favor of `extra_params`, which now makes the package more robust against misspelled arguments - thanks also for your input, potterzot!
Implemented changes are listed here: https://github.com/ropensci/comtradr/issues/69
As a next step, I would publish to CRAN, I suppose.
All the best and thank you again to everyone for the helpful comments and your time!
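For example, calls now look roughly like this (these snippets are hypothetical illustrations based on the description above rather than the package documentation, and the `extra_params` form in particular is an assumption from the earlier discussion):

```r
# "all_countries" (formerly "all"): every individual partner country,
# excluding aggregates such as World and groups such as ASEAN.
comtradr::ct_get_data(reporter = "DEU", partner = "all_countries",
                      start_date = 2020, end_date = 2020)

# "everything": every available value, replacing the former NULL option.
comtradr::ct_get_data(reporter = "DEU", partner = "everything",
                      start_date = 2020, end_date = 2020)

# Additional API parameters now go through `extra_params` instead of `...`,
# so a typo like `patner` raises an error instead of being silently ignored.
comtradr::ct_get_data(reporter = "DEU", partner = "ARG",
                      start_date = 2020, end_date = 2020,
                      extra_params = list(motCode = "1000"))
```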
@ropensci-review-bot submit response https://github.com/ropensci/software-review/issues/613#issuecomment-1868286271
Dear @yabellini,
I am already on the Slack-channel, but thanks for the invite!
@datapumpernickel: please post your response with `@ropensci-review-bot submit response <url to issue comment>` if you haven't done so already (this is an automatic reminder).

Here's the author guide for response: https://devguide.ropensci.org/authors-guide.html
Thank you for your follow-up, @datapumpernickel! @potterzot and @ernestguevarra, please let us know if you agree the changes address the comments in your review.
And Happy Holidays to all!
@noamross I agree that changes address comments and think the package is ready. Thanks!
@datapumpernickel sorry for dropping the ball on this.
Thank you for all the progress achieved since my review. I agree with the changes made in relation to my review and also those that have been made in response to @potterzot and @noamross feedback.
I've worked with the latest version a couple of days ago and specifically tried the functions to which changes have been made. All looks good and I agree that the package is ready!
Really appreciate the effort you have put into this. I think a lot of people will benefit from it!
@ropensci-review-bot approve comtradr
Approved! Thanks @datapumpernickel for submitting and @ernestguevarra, @potterzot for your reviews! :grin:
To-dos:
- If the transfer invitation expires, write a comment `@ropensci-review-bot invite me to ropensci/<package-name>`, which will re-send an invitation.
- After the transfer, write a comment `@ropensci-review-bot finalize transfer of <package-name>`, where `<package-name>` is the repo/package name. This will give you admin access back.
- If you already had a `pkgdown` website and are ok relying only on rOpenSci central docs building and branding, replace the current `pkgdown` website with a redirecting page to https://docs.ropensci.org/package_name, and include that link in the `URL` field alongside the link to the GitHub repository, e.g.: URL: https://docs.ropensci.org/foobar, https://github.com/ropensci/foobar
- Add metadata files to your package by running `codemetar::write_codemeta()` in the root of your package.
- You can add this installation method to your package README: `install.packages("<package-name>", repos = "https://ropensci.r-universe.dev")`, thanks to R-universe.

Should you want to acknowledge your reviewers in your package DESCRIPTION, you can do so by making them `"rev"`-type contributors in the `Authors@R` field (with their consent).
Welcome aboard! We'd love to host a post about your package - either a short introduction to it with an example for a technical audience or a longer post with some narrative about its development or something you learned, and an example of its use for a broader readership. If you are interested, consult the blog guide, and tag @ropensci/blog-editors in your reply. They will get in touch about timing and can answer any questions.
We maintain an online book with our best practices and tips; this chapter starts the 3rd section, which is about guidance for after onboarding (with advice on releases, package marketing, GitHub grooming). The guide also features CRAN gotchas. Please tell us what could be improved.
Last but not least, you can volunteer as a reviewer via filling a short form.
Thank you @datapumpernickel for bringing this package to this point and putting it through review after taking over maintenance! Of course most of the steps above don't apply for a package already in our suite.
I do think that the story of this package and review would make a great blog post. Taking over maintenance is a big thing to tackle and the more stories we have of it the better we are able to guide people through it! Please let us know if you are interested in writing one.
Hi all!
Really excited to see this package fully approved back in the fold of rOpenSci! Thanks to everybody for the input and your time. I am in general interested in writing a blog article, but I am not sure when I will find the time to do so.
Currently I am waiting for some feedback on the caching ability I am implementing. Then I would prioritize getting the package on CRAN. As soon as that is done I will be in touch about writing up the process! :)
Date accepted: 2024-01-09
Submitting Author Name: Paul Bochtler
Submitting Author Github Handle: @datapumpernickel
Repository: https://github.com/ropensci/comtradr
Version submitted: 0.4.0 (not yet released)
Submission type: Standard
Editor: @noamross
Reviewers: @ernestguevarra, @potterzot
Archive: TBD
Version accepted: TBD
Language: en
Scope
Please indicate which category or categories from our package fit policies this package falls under: (Please check an appropriate box below. If you are unsure, we suggest you make a pre-submission inquiry.):
Explain how and why the package falls under these categories (briefly, 1-2 sentences): The package leverages httr2 to extract trade data from the UN Comtrade API, which is one of the standard sources for trade data worldwide. See here for more info on the API.
Who is the target audience and what are scientific applications of this package? The package can be used for econometric research into the trade relations of countries, dependencies between countries, and many other use cases for trade data. The target audience is students and scientists, as well as practitioners in the fields of econometrics and political science, among others.
Are there other R packages that accomplish the same thing? If so, how does yours differ or meet our criteria for best-in-category?
As far as I am aware, there is no other package that wraps the UN Comtrade API.
(If applicable) Does your package comply with our guidance around Ethics, Data Privacy and Human Subjects Research? Not applicable.
If you made a pre-submission inquiry, please paste the link to the corresponding issue, forum post, or other discussion, or @tag the editor you contacted.
The package has previously been reviewed by rOpenSci and is already part of the rOpenSci suite. However, since the API underwent fundamental changes, so did most of the functions of the package. Hence, we asked whether we could get another review. This inquiry was answered positively here: https://github.com/ropensci/software-review-meta/issues/100
`pkgcheck` items which your package is unable to pass: as far as I am aware, the package currently passes all the pkgcheck checks.

Technical checks
Confirm each of the following by checking the box.
This package:
Publication options
[x] Do you intend for this package to go on CRAN?
[ ] Do you intend for this package to go on Bioconductor?
[ ] Do you wish to submit an Applications Article about your package to Methods in Ecology and Evolution? If so:
MEE Options
- [ ] The package is novel and will be of interest to the broad readership of the journal.
- [ ] The manuscript describing the package is no longer than 3000 words.
- [ ] You intend to archive the code for the package in a long-term repository which meets the requirements of the journal (see [MEE's Policy on Publishing Code](http://besjournals.onlinelibrary.wiley.com/hub/journal/10.1111/(ISSN)2041-210X/journal-resources/policy-on-publishing-code.html))
- (*Scope: Do consider MEE's [Aims and Scope](http://besjournals.onlinelibrary.wiley.com/hub/journal/10.1111/(ISSN)2041-210X/aims-and-scope/read-full-aims-and-scope.html) for your manuscript. We make no guarantee that your manuscript will be within MEE scope.*)
- (*Although not required, we strongly recommend having a full manuscript prepared when you submit here.*)
- (*Please do not submit your package separately to Methods in Ecology and Evolution*)

Code of conduct