Closed: editorialbot closed this issue 4 months ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.90 T=0.02 s (1601.5 files/s, 205912.7 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
R 16 508 522 1562
Rmd 6 179 291 474
TeX 1 37 0 275
CSV 4 0 0 272
Markdown 4 48 0 208
YAML 4 19 9 96
-------------------------------------------------------------------------------
SUM: 35 791 822 2887
-------------------------------------------------------------------------------
Commit count by author:
158 Heli Xu
2 heli-xu
Paper file info:
:page_facing_up: Wordcount for paper.md is 834
:white_check_mark: The paper includes a Statement of need section
License info:
:white_check_mark: License found: MIT License (Valid open source OSI approved license)
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1016/j.jamcollsurg.2020.11.024 is OK
- 10.1016/j.surg.2021.06.001 is OK
- 10.1016/j.ijdrr.2020.101517 is OK
- 10.1016/j.cpcardiol.2022.101182 is OK
- 10.1016/j.amepre.2020.06.006 is OK
- 10.1007/s11606-020-05882-3 is OK
- 10.1111/puar.13264 is OK
- 10.7326/M20-3936 is OK
- 10.1136/bmjopen-2020-048086 is OK
- 10.1016/j.lana.2022.100220 is OK
- 10.1093/aje/kwac076 is OK
- 10.1016/j.socscimed.2022.115307 is OK
MISSING DOIs
- No DOI given, and none found for title: CDC/ATSDR SVI data and documentation download
- No DOI given, and none found for title: tidycensus: Load US Census Boundary and Attribute ...
- No DOI given, and none found for title: Measuring community vulnerability to natural and a...
- No DOI given, and none found for title: dplyr: A Grammar of Data Manipulation
- No DOI given, and none found for title: purrr: Functional Programming Tools
- No DOI given, and none found for title: stringr: Simple, Consistent Wrappers for Common St...
- No DOI given, and none found for title: tidyr: Tidy Messy Data
- No DOI given, and none found for title: tidyselect: Select from a Set of Strings
- No DOI given, and none found for title: rlang: Functions for Base Types and Core R and 'Ti...
- No DOI given, and none found for title: cli: Helpers for Developing Command Line Interface...
INVALID DOIs
- https://doi.org/10.2202/1547-7355.1792 is INVALID because of 'https://doi.org/' prefix
- https://doi.org/10.1038/s41591-021-01379-6 is INVALID because of 'https://doi.org/' prefix
- https://doi.org/10.1038/s41467-023-38084-6 is INVALID because of 'https://doi.org/' prefix
- https://doi.org/10.1016/j.jacadv.2023.100577 is INVALID because of 'https://doi.org/' prefix
- https://doi.org/10.1161/CIRCULATIONAHA.121.054516 is INVALID because of 'https://doi.org/' prefix
- https://doi.org/10.1016/j.hjdsi.2021.100611 is INVALID because of 'https://doi.org/' prefix
- http://dx.doi.org/10.15585/mmwr.mm7012e1 is INVALID because of 'https://doi.org/' prefix
:wave: @mashrur-ayon can you please update us on how it's going with your review?
:wave: @epiben, @mashrur-ayon, can you please update us on how it's going with your reviews?
@editorialbot commands
Hello @mashrur-ayon, here are the things you can ask me to do:
# List all available commands
@editorialbot commands
# Get a list of all editors's GitHub handles
@editorialbot list editors
# Adds a checklist for the reviewer using this command
@editorialbot generate my checklist
# Set a value for branch
@editorialbot set joss-paper as branch
# Run checks and provide information on the repository and the paper file
@editorialbot check repository
# Check the references of the paper for missing DOIs
@editorialbot check references
# Generates the pdf paper
@editorialbot generate pdf
# Generates a LaTeX preprint file
@editorialbot generate preprint
# Get a link to the complete list of reviewers
@editorialbot list reviewers
@mashrur-ayon, I notice you generated your review checklist. Please let me know if you have any questions about how to conduct the review.
@osorensen: First off, I'm really sorry for the delay in doing this review; things have been hectic on my end recently. Generally, I think this is a good submission, but I would encourage you to do the following before acceptance:
styler:::style_active_pkg()
covr::package_coverage()
findSVI Coverage: 32.24%
R/find_svi.R: 0.00%
R/get_census_data.R: 0.00%
R/get_svi.R: 92.91%
Thank you @epiben for reviewing and providing suggestions!
- Enforce the same formatting throughout the source code, e.g., through styler:::style_active_pkg()
- Add a section to the README specifying how to contribute
I've modified the code style and added a brief section about contributing in README. https://github.com/heli-xu/findSVI/issues/19
- Extending the test to reach a greater coverage than the current 32%.
I'm afraid that it would go beyond the testthat workflow that we are familiar with, since the data retrieval part of the package relies on the census API (using tidycensus::get_acs()). The current testing only covers the SVI calculation part (get_svi()) using census data stored in the package. We would appreciate any suggestions on testing strategies for API packages, and perhaps we, or someone from the user community, can design a test suite in a future version of the package.
Please let me know if you have further feedback or recommendations. Thanks again!
That's a valid point. To work around that, I'd probably create your own wrapper around tidycensus::get_acs() and call that in your code. So, essentially, create a new file get_acs_.R with something like this:
#' Wrapper of tidycensus::get_acs() to enable mocking during testing
#' @keywords internal
get_acs_ <- function(...) {
tidycensus::get_acs(...)
}
Then, you could replace all your calls to tidycensus::get_acs with get_acs_ and use mocking to pretend you're calling the API during testing: https://testthat.r-lib.org/reference/local_mocked_bindings.html. I know it's a bit of extra work, but I think it would be very beneficial, and a good learning opportunity.
:wave: @mashrur-ayon, could you please update us on how it's going with your review?
@mashrur-ayon, could you please update us on how it's going with your review?
:wave: @Coral-orca, @zby0327, would any of you be willing to review this submission for JOSS? We carry out our checklist-driven reviews here in GitHub issues and follow these guidelines: https://joss.readthedocs.io/en/latest/review_criteria.html
@heli-xu, I'm not able to get in touch with the second reviewer, and thus have to try to find one or more new reviewers. Sorry about the delay this will cause.
Hi @osorensen, I am happy to review this submission!
Hi @osorensen , I am willing to review this paper!
@osorensen Thanks for following up. I understand and appreciate the efforts to move the reviewing process forward.
Fantastic @nilabhrardas and @zby0327!
@editorialbot add @zby0327 as reviewer
@zby0327 added to the reviewers list!
@editorialbot add @nilabhrardas as reviewer
@nilabhrardas added to the reviewers list!
@nilabhrardas and @zby0327, you find the review instructions at the top of this thread. Please reach out to me if you have any questions. I would appreciate if you could start your reviews within two weeks.
@editorialbot remind @zby0327 in two weeks
Reminder set for @zby0327 in two weeks
@editorialbot remind @nilabhrardas in two weeks
Reminder set for @nilabhrardas in two weeks
Hi @osorensen @heli-xu ,
I have had the opportunity to review your R package and would like to share my thoughts and feedback. First and foremost, I tested the package with several datasets and found that it provides reasonable results. The documentation is clear and well-written, ensuring that users can easily understand and implement the package's functionalities. The output format of the result variables is intuitive and convenient for viewing the results. However, I found the variable names somewhat unclear; as I am not familiar with SVI-related concepts, I needed to search to clarify what the variables refer to. It would be beneficial to include explanations for each result variable, either within the documentation or through reference links.

That being said, I do have some concerns regarding the package. Firstly, its functionality is quite limited, catering only to the retrieval of specific website data and the calculation of a few predefined results (i.e., SVI). The data from the website is restricted to the United States, and if users have data from other countries, they must format it accordingly to use it with the package. The package's scope is restricted to calculating SVI, which limits its usability.

Additionally, some data is hard-coded within the package, even though it is available on the website. It would be more efficient to integrate the data retrieval functionality directly into the package and eliminate the hard-coded data storage.

Moreover, the dataset is not open-source and requires an API key for access. While it appears that the package can function with limited usage without an API key, there are restrictions. I recommend clarifying the data source's availability and potential ethical issues, given that this is an open-access journal and the data involves human subjects.

Lastly, there is a strong correlation between this package and the tidycensus package, as many functions from tidycensus are called within your package. It may be more beneficial to incorporate your new functionalities directly into the tidycensus package, thereby enhancing its capabilities without the need for a separate package.

Thank you for your attention to these points. I hope my feedback proves helpful for further development of your package and helps the editor reach a decision.
Hi @zby0327 , thank you very much for taking the time to review the package and provide feedback! I will try to address your concerns and questions here.
However, I found the variable names somewhat unclear.
In the documentation of get_svi() and find_svi(), the link to the CDC/SVI documentation is provided as a reference for the description of the variable names.
Firstly, its functionality is quite limited.
It's true that the functionality of the package is very specific to calculating SVI. However, our main goal is to make it easier to calculate SVI at different geographic levels (especially those that are not available in the CDC SVI database). While the traditional workflow involves downloading census data, computing percentile ranks, summing the ranks, and computing percentile ranks for the sum, the findSVI workflow can achieve this in one line of code. We and some colleagues have tried both approaches in our research over the past few years, and found the latter much more streamlined and reproducible.
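For readers unfamiliar with that workflow, the three ranking steps can be sketched in a few lines of base R. This is an illustration of the general approach only, not findSVI's actual internals; the variable names and toy values are made up, and the percentile-rank formula (rank − 1)/(n − 1) follows the CDC/ATSDR SVI documentation.

```r
# Percentile rank as defined in the CDC/ATSDR SVI documentation
pct_rank <- function(x) (rank(x, ties.method = "min") - 1) / (sum(!is.na(x)) - 1)

# toy estimates for 5 tracts across two (made-up) vulnerability variables
est <- data.frame(
  poverty    = c(0.10, 0.25, 0.05, 0.40, 0.18),
  no_vehicle = c(0.02, 0.15, 0.08, 0.30, 0.05)
)

ranks    <- data.frame(lapply(est, pct_rank))  # 1. percentile-rank each variable
rank_sum <- rowSums(ranks)                     # 2. sum the ranks per tract
svi      <- pct_rank(rank_sum)                 # 3. percentile-rank the sums
```

With this toy input, tract 4 (highest on both variables) ends up with an SVI of 1, i.e., the most vulnerable.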
As the SVI is developed by CDC/ATSDR in the US, it is most widely used in studying public health issues in the US, though there have also been efforts to adapt it to other countries. For data from other countries, we do ask users to format it to supply into the function to calculate SVI. However, it might be interesting to incorporate census data retrieval for other countries, like Canada (perhaps via cancensus package), in the future.
Additionally, some data is hard-coded within the package, even though it is available on the website.
The lists of variables and calculation tables are indeed available on the website in the CDC/SVI documentation, but not in a format that's easily machine-readable. During the development of the package, it took quite some time to extract the information from the PDF tables and check the census variables for each year (especially pre-2016, with the previous census data portal). We do agree about keeping the package lightweight, but in this case hard-coding is an effective solution for improving performance and facilitating future maintenance.
I recommend clarifying the data source's availability and potential ethical issues, given that this is an open-access journal and the data involves human subjects.
Census (ACS) data used in the package is anonymized and publicly available, so no individual-level or record-level data is involved. While it is true that an API key is required for using the data retrieval function in the package, any user can request an API key for free.
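For anyone following along, obtaining and installing a key is a one-time step. These are real tidycensus functions; the key string below is a placeholder, not a working key.

```r
# Request a free key at https://api.census.gov/data/key_signup.html,
# then store it once so tidycensus-based packages like findSVI can find it.
library(tidycensus)
census_api_key("YOUR_API_KEY", install = TRUE)  # writes CENSUS_API_KEY to ~/.Renviron
```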
Lastly, there is a strong correlation between this package and the tidycensus package.
It's true that the data retrieval function in the package is done via tidycensus::get_acs(), but I'm inclined to think that findSVI and tidycensus have very different focuses. tidycensus provides a wonderful interface for obtaining a variety of Census data (ACS, Decennial, etc.), whereas findSVI focuses more on processing a set of ACS variables to obtain SVI. There are other examples of R packages that have built additional functionality on top of the data retrieval functionality of tidycensus (e.g., choroplethr).
Hope this helps to bring in some of our perspectives and rationales. Please let me know if you have further questions/feedback. Thank you again for your time.
Hi @heli-xu ,
Thank you very much for your detailed and thoughtful response to my review. I appreciate the clarifications and additional insights you provided.
Your package indeed performs well and achieves its intended functionality. The automation of SVI calculation is a significant improvement over traditional workflows, making the process much more streamlined and reproducible, especially for researchers focused on public health issues in the US.
However, I still have one remaining concern regarding the scope of your package. Specifically, I wonder if providing only the SVI calculation might be somewhat limited. While the package offers a highly efficient method for calculating SVI, I am uncertain whether this meets the JOSS publication standards, particularly the criterion of substantial scholarly effort as outlined in the JOSS review criteria. If this concern does not apply, I completely agree that this package should be accepted by JOSS. @osorensen
Thank you again for your time and consideration. I look forward to your thoughts on this matter.
Hi @heli-xu, I have had the chance to review your package and would like to share my thoughts. Firstly, the functionalities of the package are well-documented, and the vignette is clear and well-written! I was easily able to reproduce the illustrated examples and further test the functions on other datasets. The package does what it claims: it streamlines data retrieval and calculates SVI efficiently, with the ability to compute SVIs at different geographical levels.

Personally, I prefer that the focus of the package is very specific, i.e., to retrieve census data and calculate SVIs, rather than including more functions for retrieving/calculating other related/unrelated 'public health-y' demographics. The functionality of the functions is restricted to the US only. However, this is not a substantial drawback of the package, since other countries and sovereign states are likely to publish data in different standards or even restrict access to the required data. The reach of the package might benefit from including functionality to transform/transcribe data from other countries (at least for those with unrestricted data availability) for subsequent SVI calculation.

While the steps for generating interactive maps using leaflet have been well-documented, the process could be streamlined further by including native visualisation functionalities within the package. This could be an area to be explored for future development. Overall, I am happy to recommend this paper for publication! Best wishes.
Hi @nilabhrardas, thank you very much for taking the time to review the package and providing feedback! We're happy to hear that you found the documentation useful and the package effective overall.
The reach of the package might benefit from including functionality to transform/transcribe data from other countries (at least for those with unrestricted data availability) for subsequent SVI calculation.
That's a good point, and it's true that census data are not always publicly available. As we discussed with @zby0327, it would be an interesting next step to think about incorporating other countries' census data (perhaps leveraging packages like cancensus and geobr).
the process could be streamlined further by including native visualisation functionalities within the package.
That's definitely another area to explore. It reminds me that we often create functions (via tmap, ggplot2 or leaflet) to make it easier to reproduce maps with different data in our SVI analyses. Perhaps it would be helpful to incorporate some visualisation tools in the package! We'll keep it in mind.
Again, thank you very much for your time and support!
@editorialbot generate pdf
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1016/j.jamcollsurg.2020.11.024 is OK
- 10.1016/j.surg.2021.06.001 is OK
- 10.1016/j.ijdrr.2020.101517 is OK
- 10.1016/j.cpcardiol.2022.101182 is OK
- 10.1016/j.amepre.2020.06.006 is OK
- 10.1007/s11606-020-05882-3 is OK
- 10.1111/puar.13264 is OK
- 10.7326/M20-3936 is OK
- 10.1136/bmjopen-2020-048086 is OK
- 10.1016/j.lana.2022.100220 is OK
- 10.1093/aje/kwac076 is OK
- 10.1016/j.socscimed.2022.115307 is OK
MISSING DOIs
- No DOI given, and none found for title: CDC/ATSDR SVI data and documentation download
- No DOI given, and none found for title: tidycensus: Load US Census Boundary and Attribute ...
- No DOI given, and none found for title: Measuring community vulnerability to natural and a...
- No DOI given, and none found for title: dplyr: A Grammar of Data Manipulation
- 10.32614/cran.package.purrr may be a valid DOI for title: purrr: Functional Programming Tools
- 10.32614/cran.package.stringr may be a valid DOI for title: stringr: Simple, Consistent Wrappers for Common St...
- 10.32614/cran.package.tidyr may be a valid DOI for title: tidyr: Tidy Messy Data
- 10.32614/cran.package.tidyselect may be a valid DOI for title: tidyselect: Select from a Set of Strings
- 10.32614/cran.package.rlang may be a valid DOI for title: rlang: Functions for Base Types and Core R and 'Ti...
- 10.32614/cran.package.cli may be a valid DOI for title: cli: Helpers for Developing Command Line Interface...
INVALID DOIs
- https://doi.org/10.2202/1547-7355.1792 is INVALID because of 'https://doi.org/' prefix
- https://doi.org/10.1038/s41591-021-01379-6 is INVALID because of 'https://doi.org/' prefix
- https://doi.org/10.1038/s41467-023-38084-6 is INVALID because of 'https://doi.org/' prefix
- https://doi.org/10.1016/j.jacadv.2023.100577 is INVALID because of 'https://doi.org/' prefix
- https://doi.org/10.1161/CIRCULATIONAHA.121.054516 is INVALID because of 'https://doi.org/' prefix
- https://doi.org/10.1016/j.hjdsi.2021.100611 is INVALID because of 'https://doi.org/' prefix
- http://dx.doi.org/10.15585/mmwr.mm7012e1 is INVALID because of 'https://doi.org/' prefix
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@heli-xu, could you please go through this list, add missing DOIs and remove 'https://doi.org/' prefixes from your BibTeX source file?
You can test here at any time with @editorialbot check references.
@editorialbot check references
I'm sorry human, I don't understand that. You can see what commands I support by typing:
@editorialbot commands
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.2202/1547-7355.1792 is OK
- 10.32614/CRAN.package.tidycensus is OK
- 10.1111/disa.12198 is OK
- 10.1038/s41591-021-01379-6 is OK
- 10.1016/j.jamcollsurg.2020.11.024 is OK
- 10.1016/j.surg.2021.06.001 is OK
- 10.1038/s41467-023-38084-6 is OK
- 10.1016/j.ijdrr.2020.101517 is OK
- 10.1016/j.jacadv.2023.100577 is OK
- 10.1016/j.cpcardiol.2022.101182 is OK
- 10.1161/CIRCULATIONAHA.121.054516 is OK
- 10.1016/j.hjdsi.2021.100611 is OK
- 10.15585/mmwr.mm7012e1 is OK
- 10.1016/j.amepre.2020.06.006 is OK
- 10.1007/s11606-020-05882-3 is OK
- 10.1111/puar.13264 is OK
- 10.7326/M20-3936 is OK
- 10.1136/bmjopen-2020-048086 is OK
- 10.1016/j.lana.2022.100220 is OK
- 10.1093/aje/kwac076 is OK
- 10.1016/j.socscimed.2022.115307 is OK
- 10.32614/CRAN.package.dplyr is OK
- 10.32614/CRAN.package.purrr is OK
- 10.32614/CRAN.package.stringr is OK
- 10.32614/CRAN.package.tidyr is OK
- 10.32614/CRAN.package.tidyselect is OK
- 10.32614/CRAN.package.rlang is OK
- 10.32614/CRAN.package.cli is OK
MISSING DOIs
- None
INVALID DOIs
- None
Hi @osorensen , we've fixed the DOIs (the two references without DOIs were replaced with a hyperlink in the text and another relevant paper, respectively; for details, see https://github.com/heli-xu/findSVI/pull/23). Please let me know if additional changes are needed. Thanks!
Submitting author: <!--author-handle-->@heli-xu<!--end-author-handle--> (Heli Xu)
Repository: https://github.com/heli-xu/findSVI
Branch with paper.md (empty if default branch): joss-paper
Version: v0.1.3
Editor: <!--editor-->@osorensen<!--end-editor-->
Reviewers: @epiben, @zby0327, @nilabhrardas
Archive: 10.5281/zenodo.12611878
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@epiben, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @osorensen know.
:sparkles: Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest :sparkles:
Checklists
:memo: Checklist for @epiben
:memo: Checklist for @zby0327
:memo: Checklist for @nilabhrardas